00:00:00.000 Started by upstream project "autotest-per-patch" build number 131289
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.058 The recommended git tool is: git
00:00:00.058 using credential 00000000-0000-0000-0000-000000000002
00:00:00.060 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.125 Fetching changes from the remote Git repository
00:00:00.129 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.184 Using shallow fetch with depth 1
00:00:00.184 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.184 > git --version # timeout=10
00:00:00.249 > git --version # 'git version 2.39.2'
00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.303 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.303 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.995 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.010 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.023 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:08.023 > git config core.sparsecheckout # timeout=10
00:00:08.034 > git read-tree -mu HEAD # timeout=10
00:00:08.049 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:08.066 Commit message: "packer: Fix typo in a package name"
00:00:08.066 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:08.145 [Pipeline] Start of Pipeline
00:00:08.158 [Pipeline] library
00:00:08.160 Loading library shm_lib@master
00:00:08.160 Library shm_lib@master is cached. Copying from home.
00:00:08.178 [Pipeline] node
00:00:08.186 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.188 [Pipeline] {
00:00:08.199 [Pipeline] catchError
00:00:08.200 [Pipeline] {
00:00:08.214 [Pipeline] wrap
00:00:08.225 [Pipeline] {
00:00:08.233 [Pipeline] stage
00:00:08.235 [Pipeline] { (Prologue)
00:00:08.446 [Pipeline] sh
00:00:08.726 + logger -p user.info -t JENKINS-CI
00:00:08.743 [Pipeline] echo
00:00:08.744 Node: WFP6
00:00:08.753 [Pipeline] sh
00:00:09.051 [Pipeline] setCustomBuildProperty
00:00:09.063 [Pipeline] echo
00:00:09.064 Cleanup processes
00:00:09.069 [Pipeline] sh
00:00:09.355 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.355 1824642 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.367 [Pipeline] sh
00:00:09.651 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.651 ++ grep -v 'sudo pgrep'
00:00:09.651 ++ awk '{print $1}'
00:00:09.651 + sudo kill -9
00:00:09.651 + true
00:00:09.667 [Pipeline] cleanWs
00:00:09.678 [WS-CLEANUP] Deleting project workspace...
00:00:09.678 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.685 [WS-CLEANUP] done
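(For reference: the "Cleanup processes" step above is a pgrep/kill idiom that tolerates an empty match. A minimal standalone sketch, assuming the same workspace path as this job:)

    #!/usr/bin/env bash
    # Kill any leftover SPDK processes from a previous run of this job.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af matches full command lines; drop the pgrep itself, keep only PIDs.
    pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
    # "kill -9" with no PIDs fails, hence the trailing "|| true" (the "+ true" above).
    sudo kill -9 $pids || true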
00:00:09.690 [Pipeline] setCustomBuildProperty
00:00:09.712 [Pipeline] sh
00:00:10.034 + sudo git config --global --replace-all safe.directory '*'
00:00:10.126 [Pipeline] httpRequest
00:00:10.491 [Pipeline] echo
00:00:10.493 Sorcerer 10.211.164.101 is alive
00:00:10.504 [Pipeline] retry
00:00:10.506 [Pipeline] {
00:00:10.521 [Pipeline] httpRequest
00:00:10.525 HttpMethod: GET
00:00:10.526 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:10.526 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:10.533 Response Code: HTTP/1.1 200 OK
00:00:10.533 Success: Status code 200 is in the accepted range: 200,404
00:00:10.533 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:18.738 [Pipeline] }
00:00:18.756 [Pipeline] // retry
00:00:18.763 [Pipeline] sh
00:00:19.047 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:19.063 [Pipeline] httpRequest
00:00:19.450 [Pipeline] echo
00:00:19.452 Sorcerer 10.211.164.101 is alive
00:00:19.461 [Pipeline] retry
00:00:19.463 [Pipeline] {
00:00:19.477 [Pipeline] httpRequest
00:00:19.481 HttpMethod: GET
00:00:19.482 URL: http://10.211.164.101/packages/spdk_23f83d500281ba217c28487ccfee2426cc6bed81.tar.gz
00:00:19.482 Sending request to url: http://10.211.164.101/packages/spdk_23f83d500281ba217c28487ccfee2426cc6bed81.tar.gz
00:00:19.487 Response Code: HTTP/1.1 200 OK
00:00:19.488 Success: Status code 200 is in the accepted range: 200,404
00:00:19.488 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_23f83d500281ba217c28487ccfee2426cc6bed81.tar.gz
00:02:00.780 [Pipeline] }
00:02:00.794 [Pipeline] // retry
00:02:00.800 [Pipeline] sh
00:02:01.084 + tar --no-same-owner -xf spdk_23f83d500281ba217c28487ccfee2426cc6bed81.tar.gz
00:02:03.630 [Pipeline] sh
00:02:03.909 + git -C spdk log --oneline -n5
00:02:03.909 23f83d500 thread: add NUMA node support to spdk_iobuf_put()
00:02:03.909 3af18e093 env: add spdk_mem_get_numa_id
00:02:03.909 de13458b0 thread: allocate iobuf memory based on numa_id
00:02:03.909 02195b852 thread: update all iobuf non-get/put functions for multiple NUMA nodes
00:02:03.909 263cbb003 thread: create helper functions for iobuf_channel_init/free and abort
00:02:03.920 [Pipeline] }
00:02:03.934 [Pipeline] // stage
00:02:03.941 [Pipeline] stage
00:02:03.943 [Pipeline] { (Prepare)
00:02:03.957 [Pipeline] writeFile
00:02:03.970 [Pipeline] sh
00:02:04.248 + logger -p user.info -t JENKINS-CI
00:02:04.259 [Pipeline] sh
00:02:04.541 + logger -p user.info -t JENKINS-CI
00:02:04.551 [Pipeline] sh
00:02:04.833 + cat autorun-spdk.conf
00:02:04.833 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.833 SPDK_TEST_NVMF=1
00:02:04.833 SPDK_TEST_NVME_CLI=1
00:02:04.833 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:04.833 SPDK_TEST_NVMF_NICS=e810
00:02:04.833 SPDK_TEST_VFIOUSER=1
00:02:04.833 SPDK_RUN_UBSAN=1
00:02:04.833 NET_TYPE=phy
00:02:04.840 RUN_NIGHTLY=0
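(The job configuration printed above is plain shell, one KEY=value per line. A minimal sketch of how such a file is consumed, using only variable names shown in this log:)

    #!/usr/bin/env bash
    # Consumers simply source the conf and branch on its variables, as the
    # "+ source .../autorun-spdk.conf" traces later in this log show.
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    if [[ "$SPDK_TEST_NVMF" -eq 1 ]]; then
        echo "NVMf tests enabled, transport=$SPDK_TEST_NVMF_TRANSPORT, NICs=$SPDK_TEST_NVMF_NICS"
    fi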
00:02:04.845 [Pipeline] readFile
00:02:04.867 [Pipeline] withEnv
00:02:04.869 [Pipeline] {
00:02:04.878 [Pipeline] sh
00:02:05.158 + set -ex
00:02:05.158 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:05.158 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:05.158 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.158 ++ SPDK_TEST_NVMF=1
00:02:05.158 ++ SPDK_TEST_NVME_CLI=1
00:02:05.158 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:05.158 ++ SPDK_TEST_NVMF_NICS=e810
00:02:05.158 ++ SPDK_TEST_VFIOUSER=1
00:02:05.158 ++ SPDK_RUN_UBSAN=1
00:02:05.158 ++ NET_TYPE=phy
00:02:05.158 ++ RUN_NIGHTLY=0
00:02:05.158 + case $SPDK_TEST_NVMF_NICS in
00:02:05.158 + DRIVERS=ice
00:02:05.158 + [[ tcp == \r\d\m\a ]]
00:02:05.158 + [[ -n ice ]]
00:02:05.158 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:05.158 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:05.158 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:05.158 rmmod: ERROR: Module irdma is not currently loaded
00:02:05.158 rmmod: ERROR: Module i40iw is not currently loaded
00:02:05.158 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:05.158 + true
00:02:05.158 + for D in $DRIVERS
00:02:05.158 + sudo modprobe ice
00:02:05.158 + exit 0
00:02:05.166 [Pipeline] }
00:02:05.179 [Pipeline] // withEnv
00:02:05.183 [Pipeline] }
00:02:05.196 [Pipeline] // stage
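(The Prepare stage above boils down to a small driver-selection routine: the NIC family in SPDK_TEST_NVMF_NICS picks a kernel module, stale RDMA modules are removed best-effort, then the chosen driver is loaded. A minimal sketch; the e810/ice pairing is the one this log shows, the other mappings are illustrative assumptions:)

    #!/usr/bin/env bash
    source ./autorun-spdk.conf
    # Map the NIC family to its kernel driver; e810 -> ice is what this job uses.
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;        # shown in this log
        x722) DRIVERS=i40e ;;       # hypothetical mapping, for illustration only
        mlx5) DRIVERS=mlx5_core ;;  # hypothetical mapping, for illustration only
    esac
    # Unload leftover RDMA modules; "|| true" because most are absent
    # (hence the five rmmod ERRORs above).
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done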
00:02:05.204 [Pipeline] catchError
00:02:05.206 [Pipeline] {
00:02:05.218 [Pipeline] timeout
00:02:05.218 Timeout set to expire in 1 hr 0 min
00:02:05.219 [Pipeline] {
00:02:05.232 [Pipeline] stage
00:02:05.234 [Pipeline] { (Tests)
00:02:05.247 [Pipeline] sh
00:02:05.530 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.530 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.530 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.530 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:05.530 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:05.530 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:05.530 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:05.530 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:05.530 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:05.530 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:05.530 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:05.530 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:05.530 + source /etc/os-release
00:02:05.530 ++ NAME='Fedora Linux'
00:02:05.530 ++ VERSION='39 (Cloud Edition)'
00:02:05.530 ++ ID=fedora
00:02:05.530 ++ VERSION_ID=39
00:02:05.530 ++ VERSION_CODENAME=
00:02:05.530 ++ PLATFORM_ID=platform:f39
00:02:05.530 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:05.530 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:05.530 ++ LOGO=fedora-logo-icon
00:02:05.530 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:05.530 ++ HOME_URL=https://fedoraproject.org/
00:02:05.530 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:05.530 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:05.530 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:05.530 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:05.530 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:05.530 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:05.530 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:05.530 ++ SUPPORT_END=2024-11-12
00:02:05.530 ++ VARIANT='Cloud Edition'
00:02:05.530 ++ VARIANT_ID=cloud
00:02:05.530 + uname -a
00:02:05.530 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:05.530 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:08.066 Hugepages
00:02:08.066 node hugesize free / total
00:02:08.066 node0 1048576kB 0 / 0
00:02:08.066 node0 2048kB 0 / 0
00:02:08.066 node1 1048576kB 0 / 0
00:02:08.066 node1 2048kB 0 / 0
00:02:08.066
00:02:08.066 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:08.066 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:08.066 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:08.066 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:08.066 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:08.066 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
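(The per-node hugepage table printed by setup.sh status can also be read straight from sysfs. A minimal sketch that mirrors the "node hugesize free / total" columns above:)

    #!/usr/bin/env bash
    # Print free/total hugepages per NUMA node and page size.
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}                 # e.g. 2048kB or 1048576kB
            free=$(cat "$hp/free_hugepages")
            total=$(cat "$hp/nr_hugepages")
            echo "$(basename "$node") $size $free / $total"
        done
    done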
00:02:08.066 + rm -f /tmp/spdk-ld-path
00:02:08.066 + source autorun-spdk.conf
00:02:08.066 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.066 ++ SPDK_TEST_NVMF=1
00:02:08.066 ++ SPDK_TEST_NVME_CLI=1
00:02:08.066 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:08.066 ++ SPDK_TEST_NVMF_NICS=e810
00:02:08.066 ++ SPDK_TEST_VFIOUSER=1
00:02:08.066 ++ SPDK_RUN_UBSAN=1
00:02:08.066 ++ NET_TYPE=phy
00:02:08.066 ++ RUN_NIGHTLY=0
00:02:08.066 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:08.066 + [[ -n '' ]]
00:02:08.066 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:08.066 + for M in /var/spdk/build-*-manifest.txt
00:02:08.066 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:08.066 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:08.066 + for M in /var/spdk/build-*-manifest.txt
00:02:08.066 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:08.066 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:08.066 + for M in /var/spdk/build-*-manifest.txt
00:02:08.066 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:08.066 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:08.066 ++ uname
00:02:08.066 + [[ Linux == \L\i\n\u\x ]]
00:02:08.066 + sudo dmesg -T
00:02:08.326 + sudo dmesg --clear
00:02:08.326 + dmesg_pid=1826097
00:02:08.326 + [[ Fedora Linux == FreeBSD ]]
00:02:08.326 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.326 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:08.326 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:08.326 + [[ -x /usr/src/fio-static/fio ]]
00:02:08.326 + export FIO_BIN=/usr/src/fio-static/fio
00:02:08.326 + FIO_BIN=/usr/src/fio-static/fio
00:02:08.326 + sudo dmesg -Tw
00:02:08.326 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:08.326 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:08.326 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:08.326 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:08.326 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:08.326 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:08.326 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:08.326 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:08.326 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:08.326 Test configuration:
00:02:08.326 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:08.326 SPDK_TEST_NVMF=1
00:02:08.326 SPDK_TEST_NVME_CLI=1
00:02:08.326 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:08.326 SPDK_TEST_NVMF_NICS=e810
00:02:08.326 SPDK_TEST_VFIOUSER=1
00:02:08.326 SPDK_RUN_UBSAN=1
00:02:08.326 NET_TYPE=phy
00:02:08.326 RUN_NIGHTLY=0
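(The environment probing above follows a "probe, then export" idiom. A minimal sketch using the same paths this log checks:)

    #!/usr/bin/env bash
    # Prefer a static fio build if present; tests fall back to system fio otherwise.
    if [[ -x /usr/src/fio-static/fio ]]; then
        export FIO_BIN=/usr/src/fio-static/fio
    fi
    # Same pattern for the two QEMU flavors (vfio-user and vanilla) used by the tests.
    if [[ -e /usr/local/qemu/vfio-user-latest ]]; then
        export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
    fi
    if [[ -e /usr/local/qemu/vanilla-latest ]]; then
        export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
    fi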
19:09:31 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
19:09:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
19:09:31 -- scripts/common.sh@15 -- $ shopt -s extglob
19:09:31 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
19:09:31 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
19:09:31 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
19:09:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
19:09:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
19:09:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
19:09:31 -- paths/export.sh@5 -- $ export PATH
19:09:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
19:09:31 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
19:09:31 -- common/autobuild_common.sh@486 -- $ date +%s
19:09:31 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729184971.XXXXXX
19:09:31 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729184971.4n8wIb
19:09:31 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
19:09:31 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
19:09:31 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
19:09:31 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
19:09:31 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
19:09:31 -- common/autobuild_common.sh@502 -- $ get_config_params
19:09:31 -- common/autotest_common.sh@407 -- $ xtrace_disable
19:09:31 -- common/autotest_common.sh@10 -- $ set +x
19:09:32 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
19:09:32 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
19:09:32 -- pm/common@17 -- $ local monitor
19:09:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
19:09:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
19:09:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
19:09:32 -- pm/common@21 -- $ date +%s
19:09:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
19:09:32 -- pm/common@21 -- $ date +%s
19:09:32 -- pm/common@25 -- $ sleep 1
19:09:32 -- pm/common@21 -- $ date +%s
19:09:32 -- pm/common@21 -- $ date +%s
19:09:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729184972
19:09:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729184972
19:09:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729184972
19:09:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729184972
00:02:08.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729184972_collect-cpu-load.pm.log
00:02:08.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729184972_collect-cpu-temp.pm.log
00:02:08.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729184972_collect-vmstat.pm.log
00:02:08.327 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729184972_collect-bmc-pm.bmc.pm.log
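(start_monitor_resources launches one collector per resource and redirects each to a .pm.log, as the "Redirecting to ..." lines show. A rough standalone sketch of that pattern; paths and flags are taken verbatim from this log, while the "&" backgrounding is an assumption:)

    #!/usr/bin/env bash
    PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    TS=$(date +%s)
    # -d output dir, -l log to file, -p PID-file prefix, as invoked above.
    "$PM/collect-cpu-load" -d "$OUT" -l -p "monitor.autobuild.sh.$TS" &
    "$PM/collect-vmstat"   -d "$OUT" -l -p "monitor.autobuild.sh.$TS" &
    "$PM/collect-cpu-temp" -d "$OUT" -l -p "monitor.autobuild.sh.$TS" &
    sudo -E "$PM/collect-bmc-pm" -d "$OUT" -l -p "monitor.autobuild.sh.$TS" &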
00:02:09.264 19:09:33 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
19:09:33 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
19:09:33 -- spdk/autobuild.sh@12 -- $ umask 022
19:09:33 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
19:09:33 -- spdk/autobuild.sh@16 -- $ date -u
00:02:09.264 Thu Oct 17 05:09:33 PM UTC 2024
19:09:33 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:09.264 v25.01-pre-87-g23f83d500
19:09:33 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
19:09:33 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
19:09:33 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
19:09:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
19:09:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable
19:09:33 -- common/autotest_common.sh@10 -- $ set +x
00:02:09.523 ************************************
00:02:09.523 START TEST ubsan
00:02:09.523 ************************************
19:09:33 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:09.523 using ubsan
00:02:09.523
00:02:09.523 real 0m0.000s
00:02:09.523 user 0m0.000s
00:02:09.523 sys 0m0.000s
19:09:33 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
19:09:33 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:09.523 ************************************
00:02:09.523 END TEST ubsan
00:02:09.523 ************************************
19:09:33 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
19:09:33 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
19:09:33 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
19:09:33 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
19:09:33 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
19:09:33 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
19:09:33 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
19:09:33 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
19:09:33 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:09.523 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:09.523 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:10.089 Using 'verbs' RDMA provider
00:02:22.867 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:35.078 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:35.078 Creating mk/config.mk...done.
00:02:35.078 Creating mk/cc.flags.mk...done.
00:02:35.078 Type 'make' to build.
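(The build step reduces to configure-then-make with the feature flags assembled by get_config_params. A condensed sketch using the exact flags from the configure invocation above:)

    #!/usr/bin/env bash
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    # The log's run_test wraps "make -j96"; size -j to your core count.
    make -j96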
00:02:35.078 19:09:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
19:09:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
19:09:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
19:09:58 -- common/autotest_common.sh@10 -- $ set +x
00:02:35.078 ************************************
00:02:35.078 START TEST make
00:02:35.078 ************************************
19:09:58 make -- common/autotest_common.sh@1125 -- $ make -j96
00:02:35.337 make[1]: Nothing to be done for 'all'.
00:02:36.725 The Meson build system
00:02:36.725 Version: 1.5.0
00:02:36.725 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:36.725 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:36.725 Build type: native build
00:02:36.725 Project name: libvfio-user
00:02:36.725 Project version: 0.0.1
00:02:36.725 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:36.725 C linker for the host machine: cc ld.bfd 2.40-14
00:02:36.725 Host machine cpu family: x86_64
00:02:36.725 Host machine cpu: x86_64
00:02:36.725 Run-time dependency threads found: YES
00:02:36.725 Library dl found: YES
00:02:36.725 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:36.725 Run-time dependency json-c found: YES 0.17
00:02:36.725 Run-time dependency cmocka found: YES 1.1.7
00:02:36.725 Program pytest-3 found: NO
00:02:36.725 Program flake8 found: NO
00:02:36.725 Program misspell-fixer found: NO
00:02:36.725 Program restructuredtext-lint found: NO
00:02:36.725 Program valgrind found: YES (/usr/bin/valgrind)
00:02:36.725 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:36.725 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:36.725 Compiler for C supports arguments -Wwrite-strings: YES
00:02:36.725 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:36.725 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:36.725 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:36.725 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:36.725 Build targets in project: 8
00:02:36.725 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:36.725 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:36.725
00:02:36.725 libvfio-user 0.0.1
00:02:36.725
00:02:36.725 User defined options
00:02:36.725 buildtype : debug
00:02:36.725 default_library: shared
00:02:36.725 libdir : /usr/local/lib
00:02:36.725
00:02:36.725 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:37.291 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:37.291 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:37.291 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:37.291 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:37.291 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:37.291 [5/37] Compiling C object samples/null.p/null.c.o
00:02:37.291 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:37.291 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:37.291 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:37.291 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:37.291 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:37.291 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:37.291 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:37.291 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:37.291 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:37.291 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:37.291 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:37.291 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:37.291 [18/37] Compiling C object samples/server.p/server.c.o
00:02:37.291 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:37.291 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:37.291 [21/37] Compiling C object samples/client.p/client.c.o
00:02:37.291 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:37.291 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:37.291 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:37.291 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:37.291 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:37.291 [27/37] Linking target samples/client
00:02:37.549 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:37.549 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:37.549 [30/37] Linking target test/unit_tests
00:02:37.549 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:02:37.806 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:37.806 [33/37] Linking target samples/server
00:02:37.806 [34/37] Linking target samples/lspci
00:02:37.806 [35/37] Linking target samples/null
00:02:37.806 [36/37] Linking target samples/gpio-pci-idio-16
00:02:37.806 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:37.806 INFO: autodetecting backend as ninja
00:02:37.806 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
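(SPDK's make drives a nested Meson project here. The equivalent manual steps, reconstructed as a sketch from the directories and "User defined options" printed above; the actual wrapper may pass additional options:)

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Configure a debug, shared-library build of libvfio-user out of tree.
    meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
        --buildtype debug --default-library shared -Dlibdir=/usr/local/lib
    ninja -C "$SPDK/build/libvfio-user/build-debug"
    # Stage the install under DESTDIR, mirroring the next log line.
    DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"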
00:02:37.806 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:38.065 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:38.065 ninja: no work to do.
00:02:43.340 The Meson build system
00:02:43.340 Version: 1.5.0
00:02:43.340 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:43.340 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:43.340 Build type: native build
00:02:43.340 Program cat found: YES (/usr/bin/cat)
00:02:43.340 Project name: DPDK
00:02:43.340 Project version: 24.03.0
00:02:43.340 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:43.340 C linker for the host machine: cc ld.bfd 2.40-14
00:02:43.340 Host machine cpu family: x86_64
00:02:43.340 Host machine cpu: x86_64
00:02:43.340 Message: ## Building in Developer Mode ##
00:02:43.340 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:43.340 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:43.340 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:43.340 Program python3 found: YES (/usr/bin/python3)
00:02:43.340 Program cat found: YES (/usr/bin/cat)
00:02:43.340 Compiler for C supports arguments -march=native: YES
00:02:43.340 Checking for size of "void *" : 8
00:02:43.340 Checking for size of "void *" : 8 (cached)
00:02:43.340 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:43.340 Library m found: YES
00:02:43.340 Library numa found: YES
00:02:43.340 Has header "numaif.h" : YES
00:02:43.340 Library fdt found: NO
00:02:43.340 Library execinfo found: NO
00:02:43.340 Has header "execinfo.h" : YES
00:02:43.340 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:43.340 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:43.340 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:43.340 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:43.340 Run-time dependency openssl found: YES 3.1.1
00:02:43.340 Run-time dependency libpcap found: YES 1.10.4
00:02:43.340 Has header "pcap.h" with dependency libpcap: YES
00:02:43.340 Compiler for C supports arguments -Wcast-qual: YES
00:02:43.340 Compiler for C supports arguments -Wdeprecated: YES
00:02:43.340 Compiler for C supports arguments -Wformat: YES
00:02:43.340 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:43.340 Compiler for C supports arguments -Wformat-security: NO
00:02:43.340 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:43.340 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:43.340 Compiler for C supports arguments -Wnested-externs: YES
00:02:43.340 Compiler for C supports arguments -Wold-style-definition: YES
00:02:43.340 Compiler for C supports arguments -Wpointer-arith: YES
00:02:43.340 Compiler for C supports arguments -Wsign-compare: YES
00:02:43.340 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:43.340 Compiler for C supports arguments -Wundef: YES
00:02:43.340 Compiler for C supports arguments -Wwrite-strings: YES
00:02:43.340 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:43.340 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:43.340 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:43.340 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:43.340 Program objdump found: YES (/usr/bin/objdump)
00:02:43.340 Compiler for C supports arguments -mavx512f: YES
00:02:43.340 Checking if "AVX512 checking" compiles: YES
00:02:43.340 Fetching value of define "__SSE4_2__" : 1
00:02:43.340 Fetching value of define "__AES__" : 1
00:02:43.340 Fetching value of define "__AVX__" : 1
00:02:43.340 Fetching value of define "__AVX2__" : 1
00:02:43.340 Fetching value of define "__AVX512BW__" : 1
00:02:43.340 Fetching value of define "__AVX512CD__" : 1
00:02:43.340 Fetching value of define "__AVX512DQ__" : 1
00:02:43.340 Fetching value of define "__AVX512F__" : 1
00:02:43.340 Fetching value of define "__AVX512VL__" : 1
00:02:43.340 Fetching value of define "__PCLMUL__" : 1
00:02:43.340 Fetching value of define "__RDRND__" : 1
00:02:43.340 Fetching value of define "__RDSEED__" : 1
00:02:43.340 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:43.340 Fetching value of define "__znver1__" : (undefined)
00:02:43.340 Fetching value of define "__znver2__" : (undefined)
00:02:43.340 Fetching value of define "__znver3__" : (undefined)
00:02:43.340 Fetching value of define "__znver4__" : (undefined)
00:02:43.340 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:43.340 Message: lib/log: Defining dependency "log"
00:02:43.340 Message: lib/kvargs: Defining dependency "kvargs"
00:02:43.340 Message: lib/telemetry: Defining dependency "telemetry"
00:02:43.340 Checking for function "getentropy" : NO
00:02:43.340 Message: lib/eal: Defining dependency "eal"
00:02:43.340 Message: lib/ring: Defining dependency "ring"
00:02:43.340 Message: lib/rcu: Defining dependency "rcu"
00:02:43.340 Message: lib/mempool: Defining dependency "mempool"
00:02:43.340 Message: lib/mbuf: Defining dependency "mbuf"
00:02:43.340 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:43.340 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:43.340 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:43.340 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:43.340 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:43.340 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:43.340 Compiler for C supports arguments -mpclmul: YES
00:02:43.340 Compiler for C supports arguments -maes: YES
00:02:43.340 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:43.340 Compiler for C supports arguments -mavx512bw: YES
00:02:43.340 Compiler for C supports arguments -mavx512dq: YES
00:02:43.340 Compiler for C supports arguments -mavx512vl: YES
00:02:43.340 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:43.340 Compiler for C supports arguments -mavx2: YES
00:02:43.340 Compiler for C supports arguments -mavx: YES
00:02:43.340 Message: lib/net: Defining dependency "net"
00:02:43.340 Message: lib/meter: Defining dependency "meter"
00:02:43.340 Message: lib/ethdev: Defining dependency "ethdev"
00:02:43.340 Message: lib/pci: Defining dependency "pci"
00:02:43.340 Message: lib/cmdline: Defining dependency "cmdline"
00:02:43.340 Message: lib/hash: Defining dependency "hash"
00:02:43.340 Message: lib/timer: Defining dependency "timer"
00:02:43.340 Message: lib/compressdev: Defining dependency "compressdev"
00:02:43.340 Message: lib/cryptodev: Defining dependency "cryptodev"
"dmadev" 00:02:43.340 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:43.340 Message: lib/power: Defining dependency "power" 00:02:43.340 Message: lib/reorder: Defining dependency "reorder" 00:02:43.340 Message: lib/security: Defining dependency "security" 00:02:43.340 Has header "linux/userfaultfd.h" : YES 00:02:43.340 Has header "linux/vduse.h" : YES 00:02:43.340 Message: lib/vhost: Defining dependency "vhost" 00:02:43.340 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:43.340 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:43.340 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:43.340 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:43.340 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:43.340 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:43.340 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:43.340 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:43.340 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:43.340 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:43.340 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:43.340 Configuring doxy-api-html.conf using configuration 00:02:43.340 Configuring doxy-api-man.conf using configuration 00:02:43.340 Program mandb found: YES (/usr/bin/mandb) 00:02:43.340 Program sphinx-build found: NO 00:02:43.340 Configuring rte_build_config.h using configuration 00:02:43.340 Message: 00:02:43.340 ================= 00:02:43.340 Applications Enabled 00:02:43.340 ================= 00:02:43.340 00:02:43.340 apps: 00:02:43.340 00:02:43.340 00:02:43.340 Message: 00:02:43.340 ================= 00:02:43.340 Libraries Enabled 00:02:43.340 ================= 00:02:43.340 00:02:43.340 libs: 00:02:43.340 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:43.340 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:43.340 cryptodev, dmadev, power, reorder, security, vhost, 00:02:43.340 00:02:43.340 Message: 00:02:43.340 =============== 00:02:43.340 Drivers Enabled 00:02:43.340 =============== 00:02:43.340 00:02:43.340 common: 00:02:43.340 00:02:43.341 bus: 00:02:43.341 pci, vdev, 00:02:43.341 mempool: 00:02:43.341 ring, 00:02:43.341 dma: 00:02:43.341 00:02:43.341 net: 00:02:43.341 00:02:43.341 crypto: 00:02:43.341 00:02:43.341 compress: 00:02:43.341 00:02:43.341 vdpa: 00:02:43.341 00:02:43.341 00:02:43.341 Message: 00:02:43.341 ================= 00:02:43.341 Content Skipped 00:02:43.341 ================= 00:02:43.341 00:02:43.341 apps: 00:02:43.341 dumpcap: explicitly disabled via build config 00:02:43.341 graph: explicitly disabled via build config 00:02:43.341 pdump: explicitly disabled via build config 00:02:43.341 proc-info: explicitly disabled via build config 00:02:43.341 test-acl: explicitly disabled via build config 00:02:43.341 test-bbdev: explicitly disabled via build config 00:02:43.341 test-cmdline: explicitly disabled via build config 00:02:43.341 test-compress-perf: explicitly disabled via build config 00:02:43.341 test-crypto-perf: explicitly disabled via build config 00:02:43.341 test-dma-perf: explicitly disabled via build config 00:02:43.341 test-eventdev: explicitly disabled via build config 00:02:43.341 test-fib: explicitly disabled via build config 00:02:43.341 test-flow-perf: explicitly disabled via build config 00:02:43.341 test-gpudev: explicitly 
00:02:43.341 test-gpudev: explicitly disabled via build config
00:02:43.341 test-mldev: explicitly disabled via build config
00:02:43.341 test-pipeline: explicitly disabled via build config
00:02:43.341 test-pmd: explicitly disabled via build config
00:02:43.341 test-regex: explicitly disabled via build config
00:02:43.341 test-sad: explicitly disabled via build config
00:02:43.341 test-security-perf: explicitly disabled via build config
00:02:43.341
00:02:43.341 libs:
00:02:43.341 argparse: explicitly disabled via build config
00:02:43.341 metrics: explicitly disabled via build config
00:02:43.341 acl: explicitly disabled via build config
00:02:43.341 bbdev: explicitly disabled via build config
00:02:43.341 bitratestats: explicitly disabled via build config
00:02:43.341 bpf: explicitly disabled via build config
00:02:43.341 cfgfile: explicitly disabled via build config
00:02:43.341 distributor: explicitly disabled via build config
00:02:43.341 efd: explicitly disabled via build config
00:02:43.341 eventdev: explicitly disabled via build config
00:02:43.341 dispatcher: explicitly disabled via build config
00:02:43.341 gpudev: explicitly disabled via build config
00:02:43.341 gro: explicitly disabled via build config
00:02:43.341 gso: explicitly disabled via build config
00:02:43.341 ip_frag: explicitly disabled via build config
00:02:43.341 jobstats: explicitly disabled via build config
00:02:43.341 latencystats: explicitly disabled via build config
00:02:43.341 lpm: explicitly disabled via build config
00:02:43.341 member: explicitly disabled via build config
00:02:43.341 pcapng: explicitly disabled via build config
00:02:43.341 rawdev: explicitly disabled via build config
00:02:43.341 regexdev: explicitly disabled via build config
00:02:43.341 mldev: explicitly disabled via build config
00:02:43.341 rib: explicitly disabled via build config
00:02:43.341 sched: explicitly disabled via build config
00:02:43.341 stack: explicitly disabled via build config
00:02:43.341 ipsec: explicitly disabled via build config
00:02:43.341 pdcp: explicitly disabled via build config
00:02:43.341 fib: explicitly disabled via build config
00:02:43.341 port: explicitly disabled via build config
00:02:43.341 pdump: explicitly disabled via build config
00:02:43.341 table: explicitly disabled via build config
00:02:43.341 pipeline: explicitly disabled via build config
00:02:43.341 graph: explicitly disabled via build config
00:02:43.341 node: explicitly disabled via build config
00:02:43.341
00:02:43.341 drivers:
00:02:43.341 common/cpt: not in enabled drivers build config
00:02:43.341 common/dpaax: not in enabled drivers build config
00:02:43.341 common/iavf: not in enabled drivers build config
00:02:43.341 common/idpf: not in enabled drivers build config
00:02:43.341 common/ionic: not in enabled drivers build config
00:02:43.341 common/mvep: not in enabled drivers build config
00:02:43.341 common/octeontx: not in enabled drivers build config
00:02:43.341 bus/auxiliary: not in enabled drivers build config
00:02:43.341 bus/cdx: not in enabled drivers build config
00:02:43.341 bus/dpaa: not in enabled drivers build config
00:02:43.341 bus/fslmc: not in enabled drivers build config
00:02:43.341 bus/ifpga: not in enabled drivers build config
00:02:43.341 bus/platform: not in enabled drivers build config
00:02:43.341 bus/uacce: not in enabled drivers build config
00:02:43.341 bus/vmbus: not in enabled drivers build config
00:02:43.341 common/cnxk: not in enabled drivers build config
00:02:43.341 common/mlx5: not in enabled drivers build config
00:02:43.341 common/nfp: not in enabled drivers build config
00:02:43.341 common/nitrox: not in enabled drivers build config
00:02:43.341 common/qat: not in enabled drivers build config
00:02:43.341 common/sfc_efx: not in enabled drivers build config
00:02:43.341 mempool/bucket: not in enabled drivers build config
00:02:43.341 mempool/cnxk: not in enabled drivers build config
00:02:43.341 mempool/dpaa: not in enabled drivers build config
00:02:43.341 mempool/dpaa2: not in enabled drivers build config
00:02:43.341 mempool/octeontx: not in enabled drivers build config
00:02:43.341 mempool/stack: not in enabled drivers build config
00:02:43.341 dma/cnxk: not in enabled drivers build config
00:02:43.341 dma/dpaa: not in enabled drivers build config
00:02:43.341 dma/dpaa2: not in enabled drivers build config
00:02:43.341 dma/hisilicon: not in enabled drivers build config
00:02:43.341 dma/idxd: not in enabled drivers build config
00:02:43.341 dma/ioat: not in enabled drivers build config
00:02:43.341 dma/skeleton: not in enabled drivers build config
00:02:43.341 net/af_packet: not in enabled drivers build config
00:02:43.341 net/af_xdp: not in enabled drivers build config
00:02:43.341 net/ark: not in enabled drivers build config
00:02:43.341 net/atlantic: not in enabled drivers build config
00:02:43.341 net/avp: not in enabled drivers build config
00:02:43.341 net/axgbe: not in enabled drivers build config
00:02:43.341 net/bnx2x: not in enabled drivers build config
00:02:43.341 net/bnxt: not in enabled drivers build config
00:02:43.341 net/bonding: not in enabled drivers build config
00:02:43.341 net/cnxk: not in enabled drivers build config
00:02:43.341 net/cpfl: not in enabled drivers build config
00:02:43.341 net/cxgbe: not in enabled drivers build config
00:02:43.341 net/dpaa: not in enabled drivers build config
00:02:43.341 net/dpaa2: not in enabled drivers build config
00:02:43.341 net/e1000: not in enabled drivers build config
00:02:43.341 net/ena: not in enabled drivers build config
00:02:43.341 net/enetc: not in enabled drivers build config
00:02:43.341 net/enetfec: not in enabled drivers build config
00:02:43.341 net/enic: not in enabled drivers build config
00:02:43.341 net/failsafe: not in enabled drivers build config
00:02:43.341 net/fm10k: not in enabled drivers build config
00:02:43.341 net/gve: not in enabled drivers build config
00:02:43.341 net/hinic: not in enabled drivers build config
00:02:43.341 net/hns3: not in enabled drivers build config
00:02:43.341 net/i40e: not in enabled drivers build config
00:02:43.341 net/iavf: not in enabled drivers build config
00:02:43.341 net/ice: not in enabled drivers build config
00:02:43.341 net/idpf: not in enabled drivers build config
00:02:43.341 net/igc: not in enabled drivers build config
00:02:43.341 net/ionic: not in enabled drivers build config
00:02:43.341 net/ipn3ke: not in enabled drivers build config
00:02:43.341 net/ixgbe: not in enabled drivers build config
00:02:43.341 net/mana: not in enabled drivers build config
00:02:43.341 net/memif: not in enabled drivers build config
00:02:43.341 net/mlx4: not in enabled drivers build config
00:02:43.341 net/mlx5: not in enabled drivers build config
00:02:43.341 net/mvneta: not in enabled drivers build config
00:02:43.341 net/mvpp2: not in enabled drivers build config
00:02:43.341 net/netvsc: not in enabled drivers build config
00:02:43.341 net/nfb: not in enabled drivers build config
00:02:43.341 net/nfp: not in enabled drivers build config
00:02:43.341 net/ngbe: not in enabled drivers build config
00:02:43.341 net/null: not in enabled drivers build config
00:02:43.341 net/octeontx: not in enabled drivers build config
00:02:43.341 net/octeon_ep: not in enabled drivers build config
00:02:43.341 net/pcap: not in enabled drivers build config
00:02:43.341 net/pfe: not in enabled drivers build config
00:02:43.341 net/qede: not in enabled drivers build config
00:02:43.341 net/ring: not in enabled drivers build config
00:02:43.341 net/sfc: not in enabled drivers build config
00:02:43.341 net/softnic: not in enabled drivers build config
00:02:43.341 net/tap: not in enabled drivers build config
00:02:43.341 net/thunderx: not in enabled drivers build config
00:02:43.341 net/txgbe: not in enabled drivers build config
00:02:43.341 net/vdev_netvsc: not in enabled drivers build config
00:02:43.341 net/vhost: not in enabled drivers build config
00:02:43.341 net/virtio: not in enabled drivers build config
00:02:43.341 net/vmxnet3: not in enabled drivers build config
00:02:43.341 raw/*: missing internal dependency, "rawdev"
00:02:43.341 crypto/armv8: not in enabled drivers build config
00:02:43.341 crypto/bcmfs: not in enabled drivers build config
00:02:43.341 crypto/caam_jr: not in enabled drivers build config
00:02:43.341 crypto/ccp: not in enabled drivers build config
00:02:43.341 crypto/cnxk: not in enabled drivers build config
00:02:43.341 crypto/dpaa_sec: not in enabled drivers build config
00:02:43.341 crypto/dpaa2_sec: not in enabled drivers build config
00:02:43.341 crypto/ipsec_mb: not in enabled drivers build config
00:02:43.341 crypto/mlx5: not in enabled drivers build config
00:02:43.341 crypto/mvsam: not in enabled drivers build config
00:02:43.341 crypto/nitrox: not in enabled drivers build config
00:02:43.341 crypto/null: not in enabled drivers build config
00:02:43.341 crypto/octeontx: not in enabled drivers build config
00:02:43.341 crypto/openssl: not in enabled drivers build config
00:02:43.341 crypto/scheduler: not in enabled drivers build config
00:02:43.341 crypto/uadk: not in enabled drivers build config
00:02:43.341 crypto/virtio: not in enabled drivers build config
00:02:43.341 compress/isal: not in enabled drivers build config
00:02:43.341 compress/mlx5: not in enabled drivers build config
00:02:43.341 compress/nitrox: not in enabled drivers build config
00:02:43.341 compress/octeontx: not in enabled drivers build config
00:02:43.341 compress/zlib: not in enabled drivers build config
00:02:43.341 regex/*: missing internal dependency, "regexdev"
00:02:43.341 ml/*: missing internal dependency, "mldev"
00:02:43.341 vdpa/ifc: not in enabled drivers build config
00:02:43.341 vdpa/mlx5: not in enabled drivers build config
00:02:43.341 vdpa/nfp: not in enabled drivers build config
00:02:43.341 vdpa/sfc: not in enabled drivers build config
00:02:43.341 event/*: missing internal dependency, "eventdev"
00:02:43.341 baseband/*: missing internal dependency, "bbdev"
00:02:43.341 gpu/*: missing internal dependency, "gpudev"
00:02:43.341
00:02:43.341
00:02:43.341 Build targets in project: 85
00:02:43.341
00:02:43.341 DPDK 24.03.0
00:02:43.341
00:02:43.341 User defined options
00:02:43.341 buildtype : debug
00:02:43.341 default_library : shared
00:02:43.342 libdir : lib
00:02:43.342 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:43.342 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:43.342 c_link_args :
00:02:43.342 cpu_instruction_set: native
00:02:43.342 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:43.342 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:43.342 enable_docs : false
00:02:43.342 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:43.342 enable_kmods : false
00:02:43.342 max_lcores : 128
00:02:43.342 tests : false
00:02:43.342
00:02:43.342 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
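(The DPDK configuration above can be reproduced with a single meson setup call. A sketch assembled from the "User defined options" summary; the full disable lists are the comma-separated values printed above:)

    #!/usr/bin/env bash
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Values below mirror the "User defined options" block in this log.
    meson setup "$SPDK/dpdk/build-tmp" "$SPDK/dpdk" \
        --buildtype debug --default-library shared \
        --prefix "$SPDK/dpdk/build" \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf' \
        -Ddisable_libs='port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
        -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
    ninja -C "$SPDK/dpdk/build-tmp"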
00:02:43.916 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:43.916 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:43.916 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:43.916 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:43.916 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:43.916 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:43.916 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:43.916 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:43.916 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:43.916 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:44.176 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:44.176 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:44.176 [12/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:44.176 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:44.176 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:44.176 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:44.176 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:44.176 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:44.176 [18/268] Linking static target lib/librte_kvargs.a
00:02:44.176 [19/268] Linking static target lib/librte_log.a
00:02:44.176 [20/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:44.176 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:44.176 [22/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:44.176 [23/268] Linking static target lib/librte_pci.a
00:02:44.176 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:44.176 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:44.176 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:44.176 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:44.176 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:44.176 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:44.176 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:44.176 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:44.442 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:44.442 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:44.442 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:44.442 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:44.442 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:44.442 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:44.442 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:44.442 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:44.442 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:44.442 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:44.442 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:44.442 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:44.442 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:44.442 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:44.442 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:44.442 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:44.442 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:44.442 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:44.442 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:44.442 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:44.442 [52/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:44.442 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:44.442 [54/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:44.442 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:44.442 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:44.442 [57/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:44.442 [58/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:44.442 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:44.442 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:44.442 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:44.442 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:44.442 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:44.442 [64/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:44.442 [65/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:44.442 [66/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:44.702 [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:44.702 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:44.702 [69/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:44.702 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
target lib/net/libnet_crc_avx512_lib.a 00:02:44.702 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.702 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.702 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.702 [75/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.702 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.702 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.702 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.702 [79/268] Linking static target lib/librte_meter.a 00:02:44.702 [80/268] Linking static target lib/librte_telemetry.a 00:02:44.702 [81/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.702 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:44.702 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.702 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.702 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.702 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.702 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.702 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.702 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.702 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.702 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.702 [92/268] Linking static target lib/librte_ring.a 00:02:44.702 [93/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.702 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.702 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.702 [96/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.702 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.702 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.702 [99/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.702 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.702 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.702 [102/268] Linking static target lib/librte_net.a 00:02:44.702 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.702 [104/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.702 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.702 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.702 [107/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.702 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.702 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.702 [110/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.702 [111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.702 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:02:44.702 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.702 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.702 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.702 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.702 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.702 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.702 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.702 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.702 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.702 [122/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.702 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.702 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.702 [125/268] Linking static target lib/librte_rcu.a 00:02:44.702 [126/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.702 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.702 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.702 [129/268] Linking static target lib/librte_cmdline.a 00:02:44.702 [130/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.702 [131/268] Linking static target lib/librte_mempool.a 00:02:44.702 [132/268] Linking static target lib/librte_eal.a 00:02:44.961 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.961 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.961 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.961 [136/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.961 [137/268] Linking static target lib/librte_mbuf.a 00:02:44.961 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.961 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.961 [140/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.961 [141/268] Linking static target lib/librte_timer.a 00:02:44.961 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.961 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.961 [144/268] Linking target lib/librte_log.so.24.1 00:02:44.961 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.961 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.961 [147/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.961 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.961 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:44.961 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.961 [151/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.961 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.961 [153/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.961 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.961 [155/268] Linking static target lib/librte_dmadev.a 00:02:44.961 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.961 [157/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.961 [158/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.961 [159/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.961 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.961 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.220 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:45.220 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.220 [164/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:45.220 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.220 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:45.220 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:45.220 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.220 [169/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.220 [170/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:45.220 [171/268] Linking target lib/librte_telemetry.so.24.1 00:02:45.220 [172/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.220 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:45.220 [174/268] Linking target lib/librte_kvargs.so.24.1 00:02:45.220 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:45.220 [176/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:45.220 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:45.220 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:45.220 [179/268] Linking static target lib/librte_power.a 00:02:45.220 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.220 [181/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:45.220 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:45.220 [183/268] Linking static target lib/librte_compressdev.a 00:02:45.220 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.220 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:45.220 [186/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:45.220 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.220 [188/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.220 [189/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.220 [190/268] Linking static target drivers/librte_bus_vdev.a 00:02:45.220 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.220 [192/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.220 
[193/268] Linking static target lib/librte_reorder.a 00:02:45.220 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:45.220 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.220 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:45.220 [197/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.220 [198/268] Linking static target lib/librte_hash.a 00:02:45.220 [199/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.220 [200/268] Linking static target lib/librte_security.a 00:02:45.479 [201/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.479 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.479 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.479 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.479 [205/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.479 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.479 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.479 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:45.479 [209/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.479 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.479 [211/268] Linking static target drivers/librte_mempool_ring.a 00:02:45.479 [212/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.737 [213/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.737 [214/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.737 [215/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:45.737 [216/268] Linking static target lib/librte_cryptodev.a 00:02:45.737 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.737 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.737 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.737 [220/268] Linking static target lib/librte_ethdev.a 00:02:46.030 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.030 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.030 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.030 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:46.030 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.324 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.324 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.279 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.279 [229/268] Linking static target lib/librte_vhost.a 00:02:47.539 [230/268] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.918 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.193 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.848 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.848 [234/268] Linking target lib/librte_eal.so.24.1 00:02:54.848 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:55.107 [236/268] Linking target lib/librte_pci.so.24.1 00:02:55.107 [237/268] Linking target lib/librte_ring.so.24.1 00:02:55.107 [238/268] Linking target lib/librte_meter.so.24.1 00:02:55.107 [239/268] Linking target lib/librte_dmadev.so.24.1 00:02:55.107 [240/268] Linking target lib/librte_timer.so.24.1 00:02:55.107 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:55.107 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:55.107 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:55.107 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:55.107 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:55.107 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:55.107 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:55.107 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:55.107 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:55.366 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:55.366 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:55.366 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:55.366 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:55.366 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:55.625 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:55.625 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:55.625 [257/268] Linking target lib/librte_net.so.24.1 00:02:55.625 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:55.625 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:55.625 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.625 [261/268] Linking target lib/librte_hash.so.24.1 00:02:55.625 [262/268] Linking target lib/librte_security.so.24.1 00:02:55.625 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:55.625 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.884 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:55.884 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:55.884 [267/268] Linking target lib/librte_power.so.24.1 00:02:55.884 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:55.884 INFO: autodetecting backend as ninja 00:02:55.884 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:03:08.128 CC lib/ut/ut.o 00:03:08.128 CC lib/ut_mock/mock.o 00:03:08.128 CC lib/log/log.o 00:03:08.128 CC lib/log/log_flags.o 00:03:08.128 CC lib/log/log_deprecated.o 00:03:08.128 LIB libspdk_ut.a 
00:03:08.128 LIB libspdk_log.a 00:03:08.128 LIB libspdk_ut_mock.a 00:03:08.128 SO libspdk_ut.so.2.0 00:03:08.128 SO libspdk_log.so.7.1 00:03:08.128 SO libspdk_ut_mock.so.6.0 00:03:08.128 SYMLINK libspdk_ut.so 00:03:08.128 SYMLINK libspdk_ut_mock.so 00:03:08.128 SYMLINK libspdk_log.so 00:03:08.128 CC lib/dma/dma.o 00:03:08.128 CC lib/util/base64.o 00:03:08.128 CXX lib/trace_parser/trace.o 00:03:08.128 CC lib/util/bit_array.o 00:03:08.128 CC lib/ioat/ioat.o 00:03:08.128 CC lib/util/cpuset.o 00:03:08.128 CC lib/util/crc16.o 00:03:08.128 CC lib/util/crc32.o 00:03:08.128 CC lib/util/crc32c.o 00:03:08.128 CC lib/util/crc32_ieee.o 00:03:08.128 CC lib/util/crc64.o 00:03:08.128 CC lib/util/dif.o 00:03:08.128 CC lib/util/fd.o 00:03:08.128 CC lib/util/fd_group.o 00:03:08.128 CC lib/util/file.o 00:03:08.128 CC lib/util/hexlify.o 00:03:08.128 CC lib/util/iov.o 00:03:08.128 CC lib/util/math.o 00:03:08.128 CC lib/util/net.o 00:03:08.128 CC lib/util/pipe.o 00:03:08.128 CC lib/util/strerror_tls.o 00:03:08.128 CC lib/util/string.o 00:03:08.128 CC lib/util/uuid.o 00:03:08.128 CC lib/util/xor.o 00:03:08.128 CC lib/util/zipf.o 00:03:08.128 CC lib/util/md5.o 00:03:08.128 CC lib/vfio_user/host/vfio_user_pci.o 00:03:08.128 CC lib/vfio_user/host/vfio_user.o 00:03:08.128 LIB libspdk_dma.a 00:03:08.128 SO libspdk_dma.so.5.0 00:03:08.128 SYMLINK libspdk_dma.so 00:03:08.128 LIB libspdk_ioat.a 00:03:08.128 SO libspdk_ioat.so.7.0 00:03:08.128 SYMLINK libspdk_ioat.so 00:03:08.128 LIB libspdk_vfio_user.a 00:03:08.128 SO libspdk_vfio_user.so.5.0 00:03:08.128 LIB libspdk_util.a 00:03:08.128 SYMLINK libspdk_vfio_user.so 00:03:08.128 SO libspdk_util.so.10.0 00:03:08.128 SYMLINK libspdk_util.so 00:03:08.128 LIB libspdk_trace_parser.a 00:03:08.128 SO libspdk_trace_parser.so.6.0 00:03:08.128 SYMLINK libspdk_trace_parser.so 00:03:08.128 CC lib/vmd/vmd.o 00:03:08.128 CC lib/rdma_provider/common.o 00:03:08.128 CC lib/vmd/led.o 00:03:08.128 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.128 CC lib/json/json_parse.o 00:03:08.128 CC lib/json/json_util.o 00:03:08.128 CC lib/json/json_write.o 00:03:08.128 CC lib/conf/conf.o 00:03:08.128 CC lib/env_dpdk/env.o 00:03:08.128 CC lib/env_dpdk/memory.o 00:03:08.128 CC lib/env_dpdk/pci.o 00:03:08.128 CC lib/idxd/idxd.o 00:03:08.128 CC lib/env_dpdk/init.o 00:03:08.128 CC lib/rdma_utils/rdma_utils.o 00:03:08.128 CC lib/idxd/idxd_user.o 00:03:08.128 CC lib/env_dpdk/threads.o 00:03:08.128 CC lib/idxd/idxd_kernel.o 00:03:08.128 CC lib/env_dpdk/pci_ioat.o 00:03:08.128 CC lib/env_dpdk/pci_virtio.o 00:03:08.128 CC lib/env_dpdk/pci_vmd.o 00:03:08.128 CC lib/env_dpdk/pci_idxd.o 00:03:08.128 CC lib/env_dpdk/pci_event.o 00:03:08.128 CC lib/env_dpdk/sigbus_handler.o 00:03:08.128 CC lib/env_dpdk/pci_dpdk.o 00:03:08.128 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.128 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.128 LIB libspdk_rdma_provider.a 00:03:08.128 LIB libspdk_conf.a 00:03:08.128 SO libspdk_rdma_provider.so.6.0 00:03:08.128 SO libspdk_conf.so.6.0 00:03:08.386 LIB libspdk_json.a 00:03:08.386 LIB libspdk_rdma_utils.a 00:03:08.386 SYMLINK libspdk_conf.so 00:03:08.386 SYMLINK libspdk_rdma_provider.so 00:03:08.386 SO libspdk_rdma_utils.so.1.0 00:03:08.386 SO libspdk_json.so.6.0 00:03:08.386 SYMLINK libspdk_rdma_utils.so 00:03:08.386 SYMLINK libspdk_json.so 00:03:08.386 LIB libspdk_idxd.a 00:03:08.386 LIB libspdk_vmd.a 00:03:08.644 SO libspdk_idxd.so.12.1 00:03:08.644 SO libspdk_vmd.so.6.0 00:03:08.644 SYMLINK libspdk_idxd.so 00:03:08.644 SYMLINK libspdk_vmd.so 00:03:08.644 CC lib/jsonrpc/jsonrpc_server.o 
00:03:08.644 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.644 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.644 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.902 LIB libspdk_jsonrpc.a 00:03:08.902 SO libspdk_jsonrpc.so.6.0 00:03:08.902 SYMLINK libspdk_jsonrpc.so 00:03:09.161 LIB libspdk_env_dpdk.a 00:03:09.161 SO libspdk_env_dpdk.so.15.1 00:03:09.161 SYMLINK libspdk_env_dpdk.so 00:03:09.161 CC lib/rpc/rpc.o 00:03:09.420 LIB libspdk_rpc.a 00:03:09.420 SO libspdk_rpc.so.6.0 00:03:09.679 SYMLINK libspdk_rpc.so 00:03:09.937 CC lib/notify/notify.o 00:03:09.937 CC lib/notify/notify_rpc.o 00:03:09.937 CC lib/trace/trace.o 00:03:09.937 CC lib/keyring/keyring.o 00:03:09.937 CC lib/trace/trace_flags.o 00:03:09.937 CC lib/keyring/keyring_rpc.o 00:03:09.937 CC lib/trace/trace_rpc.o 00:03:09.937 LIB libspdk_notify.a 00:03:10.196 SO libspdk_notify.so.6.0 00:03:10.196 LIB libspdk_keyring.a 00:03:10.196 LIB libspdk_trace.a 00:03:10.196 SO libspdk_keyring.so.2.0 00:03:10.196 SYMLINK libspdk_notify.so 00:03:10.196 SO libspdk_trace.so.11.0 00:03:10.196 SYMLINK libspdk_keyring.so 00:03:10.196 SYMLINK libspdk_trace.so 00:03:10.454 CC lib/sock/sock.o 00:03:10.454 CC lib/sock/sock_rpc.o 00:03:10.454 CC lib/thread/thread.o 00:03:10.454 CC lib/thread/iobuf.o 00:03:10.713 LIB libspdk_sock.a 00:03:10.971 SO libspdk_sock.so.10.0 00:03:10.971 SYMLINK libspdk_sock.so 00:03:11.231 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:11.231 CC lib/nvme/nvme_ctrlr.o 00:03:11.231 CC lib/nvme/nvme_fabric.o 00:03:11.231 CC lib/nvme/nvme_ns_cmd.o 00:03:11.231 CC lib/nvme/nvme_ns.o 00:03:11.231 CC lib/nvme/nvme_pcie_common.o 00:03:11.231 CC lib/nvme/nvme_pcie.o 00:03:11.231 CC lib/nvme/nvme_qpair.o 00:03:11.231 CC lib/nvme/nvme.o 00:03:11.231 CC lib/nvme/nvme_quirks.o 00:03:11.231 CC lib/nvme/nvme_transport.o 00:03:11.231 CC lib/nvme/nvme_discovery.o 00:03:11.231 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.231 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.231 CC lib/nvme/nvme_tcp.o 00:03:11.231 CC lib/nvme/nvme_opal.o 00:03:11.231 CC lib/nvme/nvme_io_msg.o 00:03:11.231 CC lib/nvme/nvme_poll_group.o 00:03:11.231 CC lib/nvme/nvme_zns.o 00:03:11.231 CC lib/nvme/nvme_stubs.o 00:03:11.231 CC lib/nvme/nvme_auth.o 00:03:11.231 CC lib/nvme/nvme_cuse.o 00:03:11.231 CC lib/nvme/nvme_vfio_user.o 00:03:11.231 CC lib/nvme/nvme_rdma.o 00:03:11.490 LIB libspdk_thread.a 00:03:11.490 SO libspdk_thread.so.11.0 00:03:11.749 SYMLINK libspdk_thread.so 00:03:12.007 CC lib/blob/blobstore.o 00:03:12.007 CC lib/blob/request.o 00:03:12.007 CC lib/blob/zeroes.o 00:03:12.007 CC lib/blob/blob_bs_dev.o 00:03:12.007 CC lib/init/json_config.o 00:03:12.007 CC lib/init/rpc.o 00:03:12.007 CC lib/init/subsystem.o 00:03:12.007 CC lib/init/subsystem_rpc.o 00:03:12.007 CC lib/virtio/virtio.o 00:03:12.007 CC lib/virtio/virtio_vhost_user.o 00:03:12.007 CC lib/virtio/virtio_pci.o 00:03:12.007 CC lib/virtio/virtio_vfio_user.o 00:03:12.007 CC lib/accel/accel.o 00:03:12.007 CC lib/accel/accel_rpc.o 00:03:12.007 CC lib/fsdev/fsdev_rpc.o 00:03:12.007 CC lib/fsdev/fsdev.o 00:03:12.007 CC lib/accel/accel_sw.o 00:03:12.007 CC lib/vfu_tgt/tgt_endpoint.o 00:03:12.007 CC lib/fsdev/fsdev_io.o 00:03:12.007 CC lib/vfu_tgt/tgt_rpc.o 00:03:12.266 LIB libspdk_init.a 00:03:12.266 SO libspdk_init.so.6.0 00:03:12.266 LIB libspdk_vfu_tgt.a 00:03:12.266 LIB libspdk_virtio.a 00:03:12.266 SYMLINK libspdk_init.so 00:03:12.266 SO libspdk_vfu_tgt.so.3.0 00:03:12.266 SO libspdk_virtio.so.7.0 00:03:12.266 SYMLINK libspdk_vfu_tgt.so 00:03:12.266 SYMLINK libspdk_virtio.so 00:03:12.524 LIB libspdk_fsdev.a 00:03:12.524 SO 
libspdk_fsdev.so.1.0 00:03:12.524 CC lib/event/app.o 00:03:12.524 CC lib/event/reactor.o 00:03:12.524 CC lib/event/log_rpc.o 00:03:12.524 CC lib/event/app_rpc.o 00:03:12.524 CC lib/event/scheduler_static.o 00:03:12.524 SYMLINK libspdk_fsdev.so 00:03:12.783 LIB libspdk_accel.a 00:03:12.783 SO libspdk_accel.so.16.0 00:03:12.783 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:13.041 SYMLINK libspdk_accel.so 00:03:13.041 LIB libspdk_event.a 00:03:13.041 LIB libspdk_nvme.a 00:03:13.041 SO libspdk_event.so.14.0 00:03:13.041 SYMLINK libspdk_event.so 00:03:13.041 SO libspdk_nvme.so.14.0 00:03:13.299 CC lib/bdev/bdev.o 00:03:13.299 CC lib/bdev/bdev_rpc.o 00:03:13.299 CC lib/bdev/bdev_zone.o 00:03:13.299 CC lib/bdev/part.o 00:03:13.299 CC lib/bdev/scsi_nvme.o 00:03:13.299 SYMLINK libspdk_nvme.so 00:03:13.299 LIB libspdk_fuse_dispatcher.a 00:03:13.299 SO libspdk_fuse_dispatcher.so.1.0 00:03:13.558 SYMLINK libspdk_fuse_dispatcher.so 00:03:14.127 LIB libspdk_blob.a 00:03:14.127 SO libspdk_blob.so.11.0 00:03:14.127 SYMLINK libspdk_blob.so 00:03:14.695 CC lib/lvol/lvol.o 00:03:14.695 CC lib/blobfs/blobfs.o 00:03:14.695 CC lib/blobfs/tree.o 00:03:14.955 LIB libspdk_bdev.a 00:03:15.214 SO libspdk_bdev.so.17.0 00:03:15.214 LIB libspdk_blobfs.a 00:03:15.214 SO libspdk_blobfs.so.10.0 00:03:15.214 LIB libspdk_lvol.a 00:03:15.214 SO libspdk_lvol.so.10.0 00:03:15.214 SYMLINK libspdk_bdev.so 00:03:15.214 SYMLINK libspdk_blobfs.so 00:03:15.214 SYMLINK libspdk_lvol.so 00:03:15.473 CC lib/nbd/nbd.o 00:03:15.473 CC lib/nbd/nbd_rpc.o 00:03:15.473 CC lib/scsi/dev.o 00:03:15.473 CC lib/scsi/port.o 00:03:15.473 CC lib/scsi/lun.o 00:03:15.473 CC lib/scsi/scsi.o 00:03:15.473 CC lib/ublk/ublk.o 00:03:15.473 CC lib/scsi/scsi_bdev.o 00:03:15.473 CC lib/nvmf/ctrlr.o 00:03:15.473 CC lib/scsi/scsi_pr.o 00:03:15.473 CC lib/ublk/ublk_rpc.o 00:03:15.473 CC lib/nvmf/ctrlr_discovery.o 00:03:15.473 CC lib/scsi/scsi_rpc.o 00:03:15.473 CC lib/ftl/ftl_core.o 00:03:15.473 CC lib/nvmf/ctrlr_bdev.o 00:03:15.473 CC lib/ftl/ftl_init.o 00:03:15.473 CC lib/scsi/task.o 00:03:15.473 CC lib/nvmf/subsystem.o 00:03:15.473 CC lib/nvmf/nvmf.o 00:03:15.473 CC lib/ftl/ftl_debug.o 00:03:15.473 CC lib/ftl/ftl_layout.o 00:03:15.473 CC lib/nvmf/nvmf_rpc.o 00:03:15.473 CC lib/ftl/ftl_io.o 00:03:15.473 CC lib/nvmf/transport.o 00:03:15.473 CC lib/ftl/ftl_sb.o 00:03:15.473 CC lib/ftl/ftl_l2p.o 00:03:15.473 CC lib/nvmf/tcp.o 00:03:15.473 CC lib/ftl/ftl_l2p_flat.o 00:03:15.473 CC lib/nvmf/stubs.o 00:03:15.473 CC lib/nvmf/mdns_server.o 00:03:15.473 CC lib/ftl/ftl_nv_cache.o 00:03:15.473 CC lib/nvmf/vfio_user.o 00:03:15.473 CC lib/ftl/ftl_band.o 00:03:15.473 CC lib/nvmf/rdma.o 00:03:15.473 CC lib/ftl/ftl_band_ops.o 00:03:15.473 CC lib/nvmf/auth.o 00:03:15.473 CC lib/ftl/ftl_writer.o 00:03:15.473 CC lib/ftl/ftl_rq.o 00:03:15.473 CC lib/ftl/ftl_l2p_cache.o 00:03:15.473 CC lib/ftl/ftl_reloc.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:15.473 CC lib/ftl/ftl_p2l.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt.o 00:03:15.473 CC lib/ftl/ftl_p2l_log.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.473 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.473 CC 
lib/ftl/utils/ftl_md.o 00:03:15.473 CC lib/ftl/utils/ftl_conf.o 00:03:15.473 CC lib/ftl/utils/ftl_mempool.o 00:03:15.473 CC lib/ftl/utils/ftl_property.o 00:03:15.473 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.473 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.473 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.473 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.473 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.473 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.473 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.473 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.473 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.473 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.473 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.473 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:15.473 CC lib/ftl/base/ftl_base_dev.o 00:03:15.473 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:15.473 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.473 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.473 CC lib/ftl/ftl_trace.o 00:03:16.042 LIB libspdk_nbd.a 00:03:16.301 SO libspdk_nbd.so.7.0 00:03:16.301 LIB libspdk_ublk.a 00:03:16.301 SO libspdk_ublk.so.3.0 00:03:16.301 SYMLINK libspdk_nbd.so 00:03:16.301 LIB libspdk_scsi.a 00:03:16.301 SYMLINK libspdk_ublk.so 00:03:16.301 SO libspdk_scsi.so.9.0 00:03:16.301 SYMLINK libspdk_scsi.so 00:03:16.561 LIB libspdk_ftl.a 00:03:16.561 CC lib/iscsi/conn.o 00:03:16.561 CC lib/iscsi/iscsi.o 00:03:16.561 CC lib/iscsi/init_grp.o 00:03:16.561 CC lib/iscsi/param.o 00:03:16.561 CC lib/iscsi/portal_grp.o 00:03:16.561 CC lib/iscsi/tgt_node.o 00:03:16.561 CC lib/iscsi/iscsi_subsystem.o 00:03:16.561 CC lib/iscsi/task.o 00:03:16.561 CC lib/iscsi/iscsi_rpc.o 00:03:16.820 CC lib/vhost/vhost.o 00:03:16.820 CC lib/vhost/vhost_rpc.o 00:03:16.820 CC lib/vhost/vhost_scsi.o 00:03:16.820 CC lib/vhost/vhost_blk.o 00:03:16.820 CC lib/vhost/rte_vhost_user.o 00:03:16.820 SO libspdk_ftl.so.9.0 00:03:17.078 SYMLINK libspdk_ftl.so 00:03:17.078 LIB libspdk_nvmf.a 00:03:17.336 SO libspdk_nvmf.so.20.0 00:03:17.336 SYMLINK libspdk_nvmf.so 00:03:17.594 LIB libspdk_vhost.a 00:03:17.594 SO libspdk_vhost.so.8.0 00:03:17.594 SYMLINK libspdk_vhost.so 00:03:17.594 LIB libspdk_iscsi.a 00:03:17.852 SO libspdk_iscsi.so.8.0 00:03:17.852 SYMLINK libspdk_iscsi.so 00:03:18.418 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.418 CC module/vfu_device/vfu_virtio.o 00:03:18.418 CC module/vfu_device/vfu_virtio_blk.o 00:03:18.418 CC module/vfu_device/vfu_virtio_scsi.o 00:03:18.418 CC module/vfu_device/vfu_virtio_rpc.o 00:03:18.418 CC module/vfu_device/vfu_virtio_fs.o 00:03:18.418 CC module/accel/iaa/accel_iaa.o 00:03:18.418 CC module/blob/bdev/blob_bdev.o 00:03:18.418 CC module/accel/iaa/accel_iaa_rpc.o 00:03:18.418 CC module/accel/ioat/accel_ioat.o 00:03:18.418 CC module/accel/ioat/accel_ioat_rpc.o 00:03:18.418 CC module/sock/posix/posix.o 00:03:18.418 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.418 CC module/accel/error/accel_error.o 00:03:18.418 CC module/accel/error/accel_error_rpc.o 00:03:18.418 CC module/keyring/file/keyring.o 00:03:18.418 CC module/keyring/file/keyring_rpc.o 00:03:18.418 CC module/fsdev/aio/fsdev_aio.o 00:03:18.418 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:18.418 CC module/fsdev/aio/linux_aio_mgr.o 00:03:18.418 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:18.418 CC module/scheduler/gscheduler/gscheduler.o 00:03:18.418 CC module/accel/dsa/accel_dsa.o 00:03:18.418 CC module/accel/dsa/accel_dsa_rpc.o 00:03:18.418 CC module/keyring/linux/keyring.o 00:03:18.418 CC module/keyring/linux/keyring_rpc.o 00:03:18.418 LIB libspdk_env_dpdk_rpc.a 
00:03:18.676 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.676 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.676 LIB libspdk_keyring_file.a 00:03:18.676 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.676 LIB libspdk_scheduler_gscheduler.a 00:03:18.676 LIB libspdk_keyring_linux.a 00:03:18.676 LIB libspdk_accel_ioat.a 00:03:18.676 LIB libspdk_accel_error.a 00:03:18.676 SO libspdk_keyring_file.so.2.0 00:03:18.676 LIB libspdk_scheduler_dynamic.a 00:03:18.676 SO libspdk_scheduler_gscheduler.so.4.0 00:03:18.676 LIB libspdk_accel_iaa.a 00:03:18.676 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:18.676 SO libspdk_keyring_linux.so.1.0 00:03:18.676 SO libspdk_scheduler_dynamic.so.4.0 00:03:18.676 SO libspdk_accel_ioat.so.6.0 00:03:18.676 SO libspdk_accel_iaa.so.3.0 00:03:18.676 SO libspdk_accel_error.so.2.0 00:03:18.676 SYMLINK libspdk_keyring_file.so 00:03:18.676 SYMLINK libspdk_scheduler_gscheduler.so 00:03:18.676 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:18.676 LIB libspdk_blob_bdev.a 00:03:18.676 SYMLINK libspdk_scheduler_dynamic.so 00:03:18.934 SYMLINK libspdk_accel_ioat.so 00:03:18.934 SYMLINK libspdk_keyring_linux.so 00:03:18.934 SYMLINK libspdk_accel_error.so 00:03:18.934 LIB libspdk_accel_dsa.a 00:03:18.934 SYMLINK libspdk_accel_iaa.so 00:03:18.934 SO libspdk_blob_bdev.so.11.0 00:03:18.934 SO libspdk_accel_dsa.so.5.0 00:03:18.934 SYMLINK libspdk_blob_bdev.so 00:03:18.934 SYMLINK libspdk_accel_dsa.so 00:03:18.934 LIB libspdk_vfu_device.a 00:03:18.934 SO libspdk_vfu_device.so.3.0 00:03:18.934 SYMLINK libspdk_vfu_device.so 00:03:18.934 LIB libspdk_fsdev_aio.a 00:03:19.192 SO libspdk_fsdev_aio.so.1.0 00:03:19.192 LIB libspdk_sock_posix.a 00:03:19.192 SO libspdk_sock_posix.so.6.0 00:03:19.192 SYMLINK libspdk_fsdev_aio.so 00:03:19.192 SYMLINK libspdk_sock_posix.so 00:03:19.192 CC module/bdev/delay/vbdev_delay.o 00:03:19.192 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:19.450 CC module/bdev/gpt/gpt.o 00:03:19.450 CC module/bdev/gpt/vbdev_gpt.o 00:03:19.450 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.450 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:19.450 CC module/bdev/null/bdev_null.o 00:03:19.450 CC module/bdev/error/vbdev_error.o 00:03:19.450 CC module/bdev/error/vbdev_error_rpc.o 00:03:19.450 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.450 CC module/bdev/null/bdev_null_rpc.o 00:03:19.450 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.450 CC module/bdev/malloc/bdev_malloc.o 00:03:19.450 CC module/bdev/split/vbdev_split.o 00:03:19.450 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:19.450 CC module/bdev/split/vbdev_split_rpc.o 00:03:19.450 CC module/bdev/passthru/vbdev_passthru.o 00:03:19.450 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.450 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:19.450 CC module/bdev/raid/bdev_raid.o 00:03:19.450 CC module/bdev/raid/bdev_raid_rpc.o 00:03:19.450 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:19.450 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:19.450 CC module/bdev/raid/bdev_raid_sb.o 00:03:19.450 CC module/bdev/raid/raid0.o 00:03:19.450 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:19.450 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:19.450 CC module/bdev/raid/raid1.o 00:03:19.450 CC module/bdev/raid/concat.o 00:03:19.450 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:19.450 CC module/bdev/ftl/bdev_ftl.o 00:03:19.450 CC module/bdev/aio/bdev_aio.o 00:03:19.450 CC module/bdev/aio/bdev_aio_rpc.o 00:03:19.450 CC module/bdev/nvme/bdev_nvme.o 00:03:19.450 CC module/bdev/nvme/nvme_rpc.o 00:03:19.450 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:19.450 CC 
module/bdev/iscsi/bdev_iscsi.o 00:03:19.450 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:19.450 CC module/bdev/nvme/bdev_mdns_client.o 00:03:19.450 CC module/bdev/nvme/vbdev_opal.o 00:03:19.450 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:19.450 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:19.708 LIB libspdk_blobfs_bdev.a 00:03:19.708 LIB libspdk_bdev_split.a 00:03:19.708 SO libspdk_blobfs_bdev.so.6.0 00:03:19.708 LIB libspdk_bdev_null.a 00:03:19.708 SO libspdk_bdev_split.so.6.0 00:03:19.708 LIB libspdk_bdev_error.a 00:03:19.708 LIB libspdk_bdev_gpt.a 00:03:19.708 SO libspdk_bdev_null.so.6.0 00:03:19.708 SO libspdk_bdev_gpt.so.6.0 00:03:19.708 SYMLINK libspdk_blobfs_bdev.so 00:03:19.708 SO libspdk_bdev_error.so.6.0 00:03:19.708 SYMLINK libspdk_bdev_split.so 00:03:19.708 LIB libspdk_bdev_ftl.a 00:03:19.708 LIB libspdk_bdev_passthru.a 00:03:19.708 SYMLINK libspdk_bdev_null.so 00:03:19.708 SO libspdk_bdev_ftl.so.6.0 00:03:19.708 LIB libspdk_bdev_zone_block.a 00:03:19.708 LIB libspdk_bdev_delay.a 00:03:19.708 LIB libspdk_bdev_malloc.a 00:03:19.708 SO libspdk_bdev_passthru.so.6.0 00:03:19.708 SYMLINK libspdk_bdev_error.so 00:03:19.708 SYMLINK libspdk_bdev_gpt.so 00:03:19.708 LIB libspdk_bdev_aio.a 00:03:19.708 SO libspdk_bdev_delay.so.6.0 00:03:19.708 LIB libspdk_bdev_iscsi.a 00:03:19.708 SO libspdk_bdev_zone_block.so.6.0 00:03:19.708 SO libspdk_bdev_malloc.so.6.0 00:03:19.708 SO libspdk_bdev_aio.so.6.0 00:03:19.708 SYMLINK libspdk_bdev_passthru.so 00:03:19.708 SYMLINK libspdk_bdev_ftl.so 00:03:19.708 SO libspdk_bdev_iscsi.so.6.0 00:03:19.966 LIB libspdk_bdev_lvol.a 00:03:19.966 SYMLINK libspdk_bdev_delay.so 00:03:19.966 SYMLINK libspdk_bdev_malloc.so 00:03:19.966 SYMLINK libspdk_bdev_zone_block.so 00:03:19.966 SYMLINK libspdk_bdev_aio.so 00:03:19.966 SO libspdk_bdev_lvol.so.6.0 00:03:19.966 SYMLINK libspdk_bdev_iscsi.so 00:03:19.966 LIB libspdk_bdev_virtio.a 00:03:19.966 SO libspdk_bdev_virtio.so.6.0 00:03:19.966 SYMLINK libspdk_bdev_lvol.so 00:03:19.966 SYMLINK libspdk_bdev_virtio.so 00:03:20.224 LIB libspdk_bdev_raid.a 00:03:20.224 SO libspdk_bdev_raid.so.6.0 00:03:20.224 SYMLINK libspdk_bdev_raid.so 00:03:21.162 LIB libspdk_bdev_nvme.a 00:03:21.162 SO libspdk_bdev_nvme.so.7.0 00:03:21.162 SYMLINK libspdk_bdev_nvme.so 00:03:21.730 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.730 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:21.730 CC module/event/subsystems/sock/sock.o 00:03:21.989 CC module/event/subsystems/vmd/vmd.o 00:03:21.989 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.989 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.989 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.989 CC module/event/subsystems/keyring/keyring.o 00:03:21.989 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:21.989 CC module/event/subsystems/fsdev/fsdev.o 00:03:21.989 LIB libspdk_event_scheduler.a 00:03:21.989 LIB libspdk_event_fsdev.a 00:03:21.989 LIB libspdk_event_keyring.a 00:03:21.989 LIB libspdk_event_vhost_blk.a 00:03:21.989 LIB libspdk_event_sock.a 00:03:21.989 LIB libspdk_event_vmd.a 00:03:21.989 SO libspdk_event_scheduler.so.4.0 00:03:21.989 LIB libspdk_event_iobuf.a 00:03:21.989 LIB libspdk_event_vfu_tgt.a 00:03:21.989 SO libspdk_event_fsdev.so.1.0 00:03:21.989 SO libspdk_event_keyring.so.1.0 00:03:21.989 SO libspdk_event_vhost_blk.so.3.0 00:03:21.989 SO libspdk_event_sock.so.5.0 00:03:21.989 SO libspdk_event_vmd.so.6.0 00:03:21.989 SO libspdk_event_vfu_tgt.so.3.0 00:03:21.989 SO libspdk_event_iobuf.so.3.0 00:03:21.989 SYMLINK libspdk_event_scheduler.so 00:03:21.989 
SYMLINK libspdk_event_vhost_blk.so 00:03:21.989 SYMLINK libspdk_event_fsdev.so 00:03:21.989 SYMLINK libspdk_event_keyring.so 00:03:21.989 SYMLINK libspdk_event_vmd.so 00:03:21.989 SYMLINK libspdk_event_sock.so 00:03:22.249 SYMLINK libspdk_event_vfu_tgt.so 00:03:22.249 SYMLINK libspdk_event_iobuf.so 00:03:22.508 CC module/event/subsystems/accel/accel.o 00:03:22.508 LIB libspdk_event_accel.a 00:03:22.508 SO libspdk_event_accel.so.6.0 00:03:22.767 SYMLINK libspdk_event_accel.so 00:03:23.026 CC module/event/subsystems/bdev/bdev.o 00:03:23.026 LIB libspdk_event_bdev.a 00:03:23.285 SO libspdk_event_bdev.so.6.0 00:03:23.285 SYMLINK libspdk_event_bdev.so 00:03:23.544 CC module/event/subsystems/ublk/ublk.o 00:03:23.544 CC module/event/subsystems/scsi/scsi.o 00:03:23.544 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.544 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.544 CC module/event/subsystems/nbd/nbd.o 00:03:23.544 LIB libspdk_event_scsi.a 00:03:23.803 LIB libspdk_event_ublk.a 00:03:23.803 LIB libspdk_event_nbd.a 00:03:23.804 SO libspdk_event_scsi.so.6.0 00:03:23.804 SO libspdk_event_ublk.so.3.0 00:03:23.804 SO libspdk_event_nbd.so.6.0 00:03:23.804 LIB libspdk_event_nvmf.a 00:03:23.804 SYMLINK libspdk_event_scsi.so 00:03:23.804 SYMLINK libspdk_event_ublk.so 00:03:23.804 SYMLINK libspdk_event_nbd.so 00:03:23.804 SO libspdk_event_nvmf.so.6.0 00:03:23.804 SYMLINK libspdk_event_nvmf.so 00:03:24.062 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.062 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.322 LIB libspdk_event_vhost_scsi.a 00:03:24.322 LIB libspdk_event_iscsi.a 00:03:24.322 SO libspdk_event_vhost_scsi.so.3.0 00:03:24.322 SO libspdk_event_iscsi.so.6.0 00:03:24.322 SYMLINK libspdk_event_vhost_scsi.so 00:03:24.322 SYMLINK libspdk_event_iscsi.so 00:03:24.581 SO libspdk.so.6.0 00:03:24.581 SYMLINK libspdk.so 00:03:24.841 CXX app/trace/trace.o 00:03:24.841 CC app/trace_record/trace_record.o 00:03:24.841 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.841 CC app/spdk_nvme_perf/perf.o 00:03:24.841 CC app/spdk_nvme_identify/identify.o 00:03:24.841 CC app/spdk_top/spdk_top.o 00:03:24.841 CC test/rpc_client/rpc_client_test.o 00:03:24.841 TEST_HEADER include/spdk/accel.h 00:03:24.841 CC app/spdk_lspci/spdk_lspci.o 00:03:24.841 TEST_HEADER include/spdk/accel_module.h 00:03:24.841 TEST_HEADER include/spdk/assert.h 00:03:24.841 TEST_HEADER include/spdk/barrier.h 00:03:24.841 TEST_HEADER include/spdk/base64.h 00:03:24.841 TEST_HEADER include/spdk/bdev.h 00:03:24.841 TEST_HEADER include/spdk/bdev_module.h 00:03:24.841 TEST_HEADER include/spdk/bit_array.h 00:03:24.841 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.841 TEST_HEADER include/spdk/bit_pool.h 00:03:24.841 TEST_HEADER include/spdk/blobfs.h 00:03:24.841 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.841 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:24.841 TEST_HEADER include/spdk/blob.h 00:03:24.841 TEST_HEADER include/spdk/conf.h 00:03:24.841 TEST_HEADER include/spdk/config.h 00:03:24.841 TEST_HEADER include/spdk/cpuset.h 00:03:24.841 TEST_HEADER include/spdk/crc16.h 00:03:24.841 TEST_HEADER include/spdk/crc64.h 00:03:24.841 TEST_HEADER include/spdk/dif.h 00:03:24.841 TEST_HEADER include/spdk/crc32.h 00:03:24.841 TEST_HEADER include/spdk/dma.h 00:03:24.841 TEST_HEADER include/spdk/endian.h 00:03:24.841 TEST_HEADER include/spdk/env_dpdk.h 00:03:24.841 TEST_HEADER include/spdk/env.h 00:03:24.841 TEST_HEADER include/spdk/event.h 00:03:24.841 TEST_HEADER include/spdk/fd_group.h 00:03:24.841 TEST_HEADER include/spdk/fd.h 
00:03:24.841 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.841 CC app/spdk_dd/spdk_dd.o 00:03:24.841 TEST_HEADER include/spdk/file.h 00:03:24.841 TEST_HEADER include/spdk/fsdev.h 00:03:24.841 TEST_HEADER include/spdk/fsdev_module.h 00:03:24.841 TEST_HEADER include/spdk/ftl.h 00:03:24.841 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:24.841 TEST_HEADER include/spdk/gpt_spec.h 00:03:24.841 TEST_HEADER include/spdk/hexlify.h 00:03:24.841 TEST_HEADER include/spdk/histogram_data.h 00:03:24.841 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.841 TEST_HEADER include/spdk/init.h 00:03:24.841 TEST_HEADER include/spdk/idxd.h 00:03:24.841 TEST_HEADER include/spdk/ioat.h 00:03:24.841 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.841 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.841 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.841 TEST_HEADER include/spdk/json.h 00:03:24.841 CC app/nvmf_tgt/nvmf_main.o 00:03:24.841 TEST_HEADER include/spdk/keyring.h 00:03:24.841 TEST_HEADER include/spdk/keyring_module.h 00:03:24.841 TEST_HEADER include/spdk/log.h 00:03:24.841 TEST_HEADER include/spdk/likely.h 00:03:24.841 TEST_HEADER include/spdk/md5.h 00:03:24.841 TEST_HEADER include/spdk/lvol.h 00:03:24.841 TEST_HEADER include/spdk/mmio.h 00:03:24.841 TEST_HEADER include/spdk/memory.h 00:03:24.841 TEST_HEADER include/spdk/nbd.h 00:03:24.841 TEST_HEADER include/spdk/notify.h 00:03:24.841 TEST_HEADER include/spdk/net.h 00:03:24.841 TEST_HEADER include/spdk/nvme.h 00:03:24.841 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.841 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.841 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.841 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.841 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.841 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:24.841 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.841 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:24.841 TEST_HEADER include/spdk/nvmf.h 00:03:24.841 TEST_HEADER include/spdk/nvmf_spec.h 00:03:24.841 TEST_HEADER include/spdk/nvmf_transport.h 00:03:24.841 TEST_HEADER include/spdk/opal_spec.h 00:03:24.841 TEST_HEADER include/spdk/pci_ids.h 00:03:24.841 TEST_HEADER include/spdk/opal.h 00:03:24.841 CC app/spdk_tgt/spdk_tgt.o 00:03:24.841 TEST_HEADER include/spdk/pipe.h 00:03:24.841 TEST_HEADER include/spdk/reduce.h 00:03:24.841 TEST_HEADER include/spdk/rpc.h 00:03:24.841 TEST_HEADER include/spdk/queue.h 00:03:24.841 TEST_HEADER include/spdk/scsi.h 00:03:24.841 TEST_HEADER include/spdk/scheduler.h 00:03:24.841 TEST_HEADER include/spdk/sock.h 00:03:24.841 TEST_HEADER include/spdk/scsi_spec.h 00:03:24.841 TEST_HEADER include/spdk/stdinc.h 00:03:24.841 TEST_HEADER include/spdk/string.h 00:03:24.841 TEST_HEADER include/spdk/thread.h 00:03:24.841 TEST_HEADER include/spdk/trace_parser.h 00:03:24.841 TEST_HEADER include/spdk/trace.h 00:03:24.841 TEST_HEADER include/spdk/tree.h 00:03:24.841 TEST_HEADER include/spdk/ublk.h 00:03:24.841 TEST_HEADER include/spdk/util.h 00:03:24.841 TEST_HEADER include/spdk/uuid.h 00:03:24.841 TEST_HEADER include/spdk/version.h 00:03:24.841 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:24.841 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:24.841 TEST_HEADER include/spdk/vmd.h 00:03:24.841 TEST_HEADER include/spdk/vhost.h 00:03:24.841 TEST_HEADER include/spdk/xor.h 00:03:24.841 TEST_HEADER include/spdk/zipf.h 00:03:24.841 CXX test/cpp_headers/accel.o 00:03:24.841 CXX test/cpp_headers/accel_module.o 00:03:24.841 CXX test/cpp_headers/assert.o 00:03:24.841 CXX test/cpp_headers/barrier.o 00:03:24.841 CXX test/cpp_headers/base64.o 
00:03:24.841 CXX test/cpp_headers/bdev.o 00:03:24.841 CXX test/cpp_headers/bdev_zone.o 00:03:24.841 CXX test/cpp_headers/bdev_module.o 00:03:24.841 CXX test/cpp_headers/bit_array.o 00:03:24.841 CXX test/cpp_headers/bit_pool.o 00:03:24.841 CXX test/cpp_headers/blob_bdev.o 00:03:24.841 CXX test/cpp_headers/blob.o 00:03:24.841 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.841 CXX test/cpp_headers/blobfs.o 00:03:24.841 CXX test/cpp_headers/conf.o 00:03:24.841 CXX test/cpp_headers/crc16.o 00:03:24.841 CXX test/cpp_headers/config.o 00:03:24.841 CXX test/cpp_headers/cpuset.o 00:03:24.841 CXX test/cpp_headers/crc32.o 00:03:24.841 CXX test/cpp_headers/dif.o 00:03:24.841 CXX test/cpp_headers/crc64.o 00:03:24.841 CXX test/cpp_headers/dma.o 00:03:24.841 CXX test/cpp_headers/endian.o 00:03:24.841 CXX test/cpp_headers/env_dpdk.o 00:03:24.841 CXX test/cpp_headers/event.o 00:03:24.841 CXX test/cpp_headers/env.o 00:03:24.841 CXX test/cpp_headers/fd.o 00:03:24.841 CXX test/cpp_headers/fd_group.o 00:03:24.841 CXX test/cpp_headers/fsdev.o 00:03:24.841 CXX test/cpp_headers/fsdev_module.o 00:03:24.841 CXX test/cpp_headers/file.o 00:03:24.841 CXX test/cpp_headers/fuse_dispatcher.o 00:03:24.841 CXX test/cpp_headers/ftl.o 00:03:24.841 CXX test/cpp_headers/gpt_spec.o 00:03:24.841 CXX test/cpp_headers/histogram_data.o 00:03:24.841 CXX test/cpp_headers/hexlify.o 00:03:24.841 CXX test/cpp_headers/idxd.o 00:03:24.841 CXX test/cpp_headers/idxd_spec.o 00:03:24.841 CXX test/cpp_headers/ioat_spec.o 00:03:24.841 CXX test/cpp_headers/init.o 00:03:24.841 CXX test/cpp_headers/ioat.o 00:03:24.841 CXX test/cpp_headers/json.o 00:03:24.841 CXX test/cpp_headers/iscsi_spec.o 00:03:24.841 CXX test/cpp_headers/jsonrpc.o 00:03:24.841 CXX test/cpp_headers/keyring.o 00:03:24.841 CXX test/cpp_headers/keyring_module.o 00:03:24.841 CXX test/cpp_headers/likely.o 00:03:24.841 CXX test/cpp_headers/log.o 00:03:24.841 CXX test/cpp_headers/lvol.o 00:03:25.109 CXX test/cpp_headers/md5.o 00:03:25.109 CXX test/cpp_headers/memory.o 00:03:25.109 CXX test/cpp_headers/nbd.o 00:03:25.109 CXX test/cpp_headers/mmio.o 00:03:25.109 CXX test/cpp_headers/net.o 00:03:25.109 CXX test/cpp_headers/notify.o 00:03:25.109 CXX test/cpp_headers/nvme.o 00:03:25.109 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:25.109 CXX test/cpp_headers/nvme_intel.o 00:03:25.109 CXX test/cpp_headers/nvme_ocssd.o 00:03:25.109 CXX test/cpp_headers/nvme_spec.o 00:03:25.109 CXX test/cpp_headers/nvme_zns.o 00:03:25.109 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:25.109 CXX test/cpp_headers/nvmf.o 00:03:25.109 CXX test/cpp_headers/nvmf_cmd.o 00:03:25.109 CXX test/cpp_headers/nvmf_spec.o 00:03:25.109 CXX test/cpp_headers/nvmf_transport.o 00:03:25.109 CC examples/ioat/verify/verify.o 00:03:25.109 CC examples/util/zipf/zipf.o 00:03:25.109 CXX test/cpp_headers/opal.o 00:03:25.109 CC test/thread/poller_perf/poller_perf.o 00:03:25.109 CC test/env/vtophys/vtophys.o 00:03:25.109 CC examples/ioat/perf/perf.o 00:03:25.109 CC test/env/pci/pci_ut.o 00:03:25.109 CC test/app/histogram_perf/histogram_perf.o 00:03:25.109 CC test/app/jsoncat/jsoncat.o 00:03:25.109 CC test/env/memory/memory_ut.o 00:03:25.109 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.109 CC test/app/bdev_svc/bdev_svc.o 00:03:25.109 CC test/app/stub/stub.o 00:03:25.109 CC app/fio/nvme/fio_plugin.o 00:03:25.109 CC test/dma/test_dma/test_dma.o 00:03:25.109 CC app/fio/bdev/fio_plugin.o 00:03:25.373 LINK spdk_nvme_discover 00:03:25.373 LINK spdk_lspci 00:03:25.373 LINK rpc_client_test 00:03:25.373 LINK interrupt_tgt 00:03:25.373 LINK 
spdk_trace_record 00:03:25.373 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.632 LINK jsoncat 00:03:25.632 LINK nvmf_tgt 00:03:25.632 LINK poller_perf 00:03:25.632 LINK histogram_perf 00:03:25.632 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.632 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.632 LINK zipf 00:03:25.632 CXX test/cpp_headers/opal_spec.o 00:03:25.632 CXX test/cpp_headers/pipe.o 00:03:25.632 CXX test/cpp_headers/pci_ids.o 00:03:25.632 CXX test/cpp_headers/queue.o 00:03:25.632 CXX test/cpp_headers/reduce.o 00:03:25.632 CXX test/cpp_headers/rpc.o 00:03:25.632 CXX test/cpp_headers/scheduler.o 00:03:25.632 LINK iscsi_tgt 00:03:25.632 CXX test/cpp_headers/scsi.o 00:03:25.632 CXX test/cpp_headers/scsi_spec.o 00:03:25.632 CXX test/cpp_headers/sock.o 00:03:25.632 CXX test/cpp_headers/stdinc.o 00:03:25.632 CXX test/cpp_headers/string.o 00:03:25.632 CXX test/cpp_headers/thread.o 00:03:25.632 CXX test/cpp_headers/trace.o 00:03:25.632 CXX test/cpp_headers/trace_parser.o 00:03:25.632 CXX test/cpp_headers/tree.o 00:03:25.632 CXX test/cpp_headers/ublk.o 00:03:25.632 CXX test/cpp_headers/util.o 00:03:25.632 LINK verify 00:03:25.632 CXX test/cpp_headers/uuid.o 00:03:25.632 CXX test/cpp_headers/version.o 00:03:25.632 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.632 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.632 CXX test/cpp_headers/vhost.o 00:03:25.632 LINK spdk_dd 00:03:25.632 CXX test/cpp_headers/vmd.o 00:03:25.632 CXX test/cpp_headers/xor.o 00:03:25.632 LINK vtophys 00:03:25.632 LINK ioat_perf 00:03:25.632 CXX test/cpp_headers/zipf.o 00:03:25.632 LINK spdk_tgt 00:03:25.632 LINK env_dpdk_post_init 00:03:25.632 LINK bdev_svc 00:03:25.632 LINK stub 00:03:25.632 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.632 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:25.890 LINK spdk_trace 00:03:25.890 LINK pci_ut 00:03:25.890 LINK nvme_fuzz 00:03:25.890 LINK spdk_bdev 00:03:26.149 LINK test_dma 00:03:26.149 CC test/event/reactor/reactor.o 00:03:26.149 CC examples/idxd/perf/perf.o 00:03:26.149 CC examples/sock/hello_world/hello_sock.o 00:03:26.149 CC examples/vmd/lsvmd/lsvmd.o 00:03:26.149 CC test/event/event_perf/event_perf.o 00:03:26.149 CC test/event/app_repeat/app_repeat.o 00:03:26.149 CC test/event/reactor_perf/reactor_perf.o 00:03:26.149 CC examples/vmd/led/led.o 00:03:26.149 CC test/event/scheduler/scheduler.o 00:03:26.149 LINK spdk_nvme 00:03:26.149 CC examples/thread/thread/thread_ex.o 00:03:26.149 LINK spdk_nvme_perf 00:03:26.149 LINK vhost_fuzz 00:03:26.149 LINK reactor 00:03:26.149 LINK spdk_nvme_identify 00:03:26.149 LINK lsvmd 00:03:26.149 LINK event_perf 00:03:26.149 LINK reactor_perf 00:03:26.149 LINK spdk_top 00:03:26.149 LINK led 00:03:26.149 LINK app_repeat 00:03:26.149 CC app/vhost/vhost.o 00:03:26.149 LINK hello_sock 00:03:26.407 LINK mem_callbacks 00:03:26.407 LINK scheduler 00:03:26.407 LINK idxd_perf 00:03:26.407 LINK thread 00:03:26.407 LINK vhost 00:03:26.407 CC test/nvme/e2edp/nvme_dp.o 00:03:26.407 CC test/nvme/fdp/fdp.o 00:03:26.407 CC test/nvme/startup/startup.o 00:03:26.407 CC test/nvme/reset/reset.o 00:03:26.407 CC test/nvme/aer/aer.o 00:03:26.407 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.407 CC test/nvme/compliance/nvme_compliance.o 00:03:26.407 CC test/nvme/boot_partition/boot_partition.o 00:03:26.407 CC test/nvme/simple_copy/simple_copy.o 00:03:26.407 CC test/nvme/err_injection/err_injection.o 00:03:26.407 CC test/nvme/sgl/sgl.o 00:03:26.407 CC test/nvme/reserve/reserve.o 00:03:26.407 CC test/nvme/overhead/overhead.o 00:03:26.407 CC 
test/nvme/cuse/cuse.o 00:03:26.407 CC test/nvme/fused_ordering/fused_ordering.o 00:03:26.407 CC test/nvme/connect_stress/connect_stress.o 00:03:26.666 CC test/blobfs/mkfs/mkfs.o 00:03:26.666 CC test/accel/dif/dif.o 00:03:26.666 CC test/lvol/esnap/esnap.o 00:03:26.666 LINK memory_ut 00:03:26.666 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.666 LINK startup 00:03:26.666 CC examples/nvme/hello_world/hello_world.o 00:03:26.666 CC examples/nvme/reconnect/reconnect.o 00:03:26.666 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.666 CC examples/nvme/abort/abort.o 00:03:26.666 LINK boot_partition 00:03:26.666 CC examples/nvme/arbitration/arbitration.o 00:03:26.666 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.666 CC examples/nvme/hotplug/hotplug.o 00:03:26.666 LINK fused_ordering 00:03:26.666 LINK err_injection 00:03:26.666 LINK connect_stress 00:03:26.666 LINK reserve 00:03:26.666 LINK doorbell_aers 00:03:26.926 LINK simple_copy 00:03:26.926 LINK mkfs 00:03:26.926 LINK sgl 00:03:26.926 LINK nvme_dp 00:03:26.926 LINK reset 00:03:26.926 LINK aer 00:03:26.926 LINK overhead 00:03:26.926 LINK fdp 00:03:26.926 CC examples/accel/perf/accel_perf.o 00:03:26.926 LINK nvme_compliance 00:03:26.926 CC examples/blob/cli/blobcli.o 00:03:26.926 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:26.926 LINK pmr_persistence 00:03:26.926 LINK cmb_copy 00:03:26.926 CC examples/blob/hello_world/hello_blob.o 00:03:26.926 LINK hello_world 00:03:26.926 LINK hotplug 00:03:26.926 LINK iscsi_fuzz 00:03:26.926 LINK arbitration 00:03:27.185 LINK reconnect 00:03:27.185 LINK abort 00:03:27.185 LINK hello_blob 00:03:27.185 LINK nvme_manage 00:03:27.185 LINK hello_fsdev 00:03:27.185 LINK dif 00:03:27.185 LINK accel_perf 00:03:27.444 LINK blobcli 00:03:27.703 LINK cuse 00:03:27.703 CC test/bdev/bdevio/bdevio.o 00:03:27.703 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.703 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.962 LINK bdevio 00:03:27.962 LINK hello_bdev 00:03:28.530 LINK bdevperf 00:03:28.788 CC examples/nvmf/nvmf/nvmf.o 00:03:29.047 LINK nvmf 00:03:30.427 LINK esnap 00:03:30.427 00:03:30.427 real 0m55.527s 00:03:30.427 user 8m15.839s 00:03:30.427 sys 3m36.192s 00:03:30.427 19:10:54 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:30.427 19:10:54 make -- common/autotest_common.sh@10 -- $ set +x 00:03:30.427 ************************************ 00:03:30.427 END TEST make 00:03:30.427 ************************************ 00:03:30.427 19:10:54 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.427 19:10:54 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:30.427 19:10:54 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.427 19:10:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.427 19:10:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.427 19:10:54 -- pm/common@44 -- $ pid=1826128 00:03:30.427 19:10:54 -- pm/common@50 -- $ kill -TERM 1826128 00:03:30.427 19:10:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.427 19:10:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.427 19:10:54 -- pm/common@44 -- $ pid=1826130 00:03:30.427 19:10:54 -- pm/common@50 -- $ kill -TERM 1826130 00:03:30.427 19:10:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.427 19:10:54 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:30.427 19:10:54 -- pm/common@44 -- $ pid=1826131 00:03:30.427 19:10:54 -- pm/common@50 -- $ kill -TERM 1826131 00:03:30.427 19:10:54 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.427 19:10:54 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:30.427 19:10:54 -- pm/common@44 -- $ pid=1826155 00:03:30.427 19:10:54 -- pm/common@50 -- $ sudo -E kill -TERM 1826155 00:03:30.687 19:10:54 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:30.687 19:10:54 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:30.687 19:10:54 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:30.687 19:10:54 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:30.687 19:10:54 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:30.687 19:10:54 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:30.687 19:10:54 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:30.687 19:10:54 -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.687 19:10:54 -- scripts/common.sh@336 -- # read -ra ver1 00:03:30.687 19:10:54 -- scripts/common.sh@337 -- # IFS=.-: 00:03:30.687 19:10:54 -- scripts/common.sh@337 -- # read -ra ver2 00:03:30.687 19:10:54 -- scripts/common.sh@338 -- # local 'op=<' 00:03:30.687 19:10:54 -- scripts/common.sh@340 -- # ver1_l=2 00:03:30.687 19:10:54 -- scripts/common.sh@341 -- # ver2_l=1 00:03:30.687 19:10:54 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:30.687 19:10:54 -- scripts/common.sh@344 -- # case "$op" in 00:03:30.687 19:10:54 -- scripts/common.sh@345 -- # : 1 00:03:30.687 19:10:54 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:30.687 19:10:54 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:30.687 19:10:54 -- scripts/common.sh@365 -- # decimal 1 00:03:30.687 19:10:54 -- scripts/common.sh@353 -- # local d=1 00:03:30.687 19:10:54 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.687 19:10:54 -- scripts/common.sh@355 -- # echo 1 00:03:30.687 19:10:54 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:30.687 19:10:54 -- scripts/common.sh@366 -- # decimal 2 00:03:30.687 19:10:54 -- scripts/common.sh@353 -- # local d=2 00:03:30.687 19:10:54 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.687 19:10:54 -- scripts/common.sh@355 -- # echo 2 00:03:30.687 19:10:54 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:30.687 19:10:54 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:30.687 19:10:54 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:30.687 19:10:54 -- scripts/common.sh@368 -- # return 0 00:03:30.687 19:10:54 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.687 19:10:54 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.687 --rc genhtml_branch_coverage=1 00:03:30.687 --rc genhtml_function_coverage=1 00:03:30.687 --rc genhtml_legend=1 00:03:30.687 --rc geninfo_all_blocks=1 00:03:30.687 --rc geninfo_unexecuted_blocks=1 00:03:30.687 00:03:30.687 ' 00:03:30.687 19:10:54 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.687 --rc genhtml_branch_coverage=1 00:03:30.687 --rc genhtml_function_coverage=1 00:03:30.687 --rc genhtml_legend=1 00:03:30.687 --rc geninfo_all_blocks=1 00:03:30.687 --rc geninfo_unexecuted_blocks=1 00:03:30.687 00:03:30.687 ' 00:03:30.687 19:10:54 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.687 --rc genhtml_branch_coverage=1 00:03:30.687 --rc genhtml_function_coverage=1 00:03:30.687 --rc genhtml_legend=1 00:03:30.687 --rc geninfo_all_blocks=1 00:03:30.687 --rc geninfo_unexecuted_blocks=1 00:03:30.687 00:03:30.687 ' 00:03:30.687 19:10:54 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:30.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.687 --rc genhtml_branch_coverage=1 00:03:30.687 --rc genhtml_function_coverage=1 00:03:30.687 --rc genhtml_legend=1 00:03:30.687 --rc geninfo_all_blocks=1 00:03:30.687 --rc geninfo_unexecuted_blocks=1 00:03:30.687 00:03:30.687 ' 00:03:30.687 19:10:54 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:30.687 19:10:54 -- nvmf/common.sh@7 -- # uname -s 00:03:30.687 19:10:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:30.687 19:10:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:30.687 19:10:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:30.687 19:10:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:30.687 19:10:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:30.687 19:10:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:30.687 19:10:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:30.687 19:10:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:30.687 19:10:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:30.687 19:10:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:30.687 19:10:54 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:30.687 19:10:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:30.687 19:10:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:30.687 19:10:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:30.687 19:10:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:30.687 19:10:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:30.687 19:10:54 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:30.687 19:10:54 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:30.687 19:10:54 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.687 19:10:54 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.687 19:10:54 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.687 19:10:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.687 19:10:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.687 19:10:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.687 19:10:54 -- paths/export.sh@5 -- # export PATH 00:03:30.687 19:10:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.687 19:10:54 -- nvmf/common.sh@51 -- # : 0 00:03:30.687 19:10:54 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:30.687 19:10:54 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:30.688 19:10:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.688 19:10:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.688 19:10:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.688 19:10:54 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:30.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:30.688 19:10:54 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:30.688 19:10:54 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:30.688 19:10:54 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:30.688 19:10:54 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:30.688 19:10:54 -- spdk/autotest.sh@32 -- # uname -s 00:03:30.688 19:10:54 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:30.688 19:10:54 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:30.688 19:10:54 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
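The nvmf/common.sh trace above derives the host NQN with nvme-cli and reuses its UUID portion as the host ID. A minimal sketch of that pattern, assuming nvme-cli is installed; the parameter-expansion strip is illustrative, not necessarily how common.sh itself derives NVME_HOSTID:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Both values end up in the NVME_HOST array, presumably for the later 'nvme connect' calls so the target can identify this initiator.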
00:03:30.688 19:10:54 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:30.688 19:10:54 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:30.688 19:10:54 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:30.688 19:10:54 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.688 19:10:54 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.688 19:10:54 -- spdk/autotest.sh@48 -- # udevadm_pid=1888347 00:03:30.688 19:10:54 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.688 19:10:54 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.688 19:10:54 -- pm/common@17 -- # local monitor 00:03:30.688 19:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.688 19:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.688 19:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.688 19:10:54 -- pm/common@21 -- # date +%s 00:03:30.688 19:10:54 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.688 19:10:54 -- pm/common@21 -- # date +%s 00:03:30.688 19:10:54 -- pm/common@25 -- # sleep 1 00:03:30.688 19:10:54 -- pm/common@21 -- # date +%s 00:03:30.688 19:10:54 -- pm/common@21 -- # date +%s 00:03:30.688 19:10:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729185054 00:03:30.688 19:10:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729185054 00:03:30.688 19:10:54 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729185054 00:03:30.688 19:10:54 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729185054 00:03:30.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729185054_collect-cpu-load.pm.log 00:03:30.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729185054_collect-vmstat.pm.log 00:03:30.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729185054_collect-cpu-temp.pm.log 00:03:30.688 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729185054_collect-bmc-pm.bmc.pm.log 00:03:31.627 19:10:55 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.627 19:10:55 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.627 19:10:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.887 19:10:55 -- common/autotest_common.sh@10 -- # set +x 00:03:31.887 19:10:55 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.887 19:10:55 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:31.887 19:10:55 -- common/autotest_common.sh@10 -- # set +x 00:03:31.887 19:10:55 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:31.887 19:10:55 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.887 19:10:55 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.887 19:10:55 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:31.887 19:10:55 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:31.887 19:10:55 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.887 19:10:55 -- common/autotest_common.sh@1455 -- # uname 00:03:31.887 19:10:55 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:31.887 19:10:55 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.887 19:10:55 -- common/autotest_common.sh@1475 -- # uname 00:03:31.887 19:10:55 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:31.887 19:10:55 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:31.887 19:10:55 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:31.887 lcov: LCOV version 1.15 00:03:31.887 19:10:55 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:49.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.978 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:56.548 19:11:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:56.548 19:11:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.548 19:11:20 -- common/autotest_common.sh@10 -- # set +x 00:03:56.548 19:11:20 -- spdk/autotest.sh@78 -- # rm -f 00:03:56.548 19:11:20 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.840 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:59.840 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:59.840 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:59.840 19:11:23 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:59.840 19:11:23 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:59.840 19:11:23 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:59.840 19:11:23 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:59.840 19:11:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:59.840 19:11:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:59.840 19:11:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:59.840 19:11:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.840 19:11:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:59.840 19:11:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:59.840 19:11:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:59.840 19:11:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:59.840 19:11:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:59.840 19:11:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:59.840 19:11:23 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:59.840 No valid GPT data, bailing 00:03:59.840 19:11:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.840 19:11:23 -- scripts/common.sh@394 -- # pt= 00:03:59.840 19:11:23 -- scripts/common.sh@395 -- # return 1 00:03:59.840 19:11:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:59.840 1+0 records in 00:03:59.840 1+0 records out 00:03:59.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427433 s, 245 MB/s 00:03:59.840 19:11:23 -- spdk/autotest.sh@105 -- # sync 00:03:59.840 19:11:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:59.840 19:11:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:59.840 19:11:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:05.115 19:11:28 -- spdk/autotest.sh@111 -- # uname -s 00:04:05.115 19:11:28 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:05.115 19:11:28 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:05.115 19:11:28 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:08.405 Hugepages 00:04:08.405 node hugesize free / total 00:04:08.405 node0 1048576kB 0 / 0 00:04:08.405 node0 2048kB 0 / 0 00:04:08.405 node1 1048576kB 0 / 0 00:04:08.405 node1 2048kB 0 / 0 00:04:08.405 00:04:08.405 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.405 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:08.405 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:08.405 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:08.405 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:08.405 I/OAT 0000:80:04.7 8086 
2021 1 ioatdma - - 00:04:08.405 19:11:31 -- spdk/autotest.sh@117 -- # uname -s 00:04:08.405 19:11:31 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:08.405 19:11:31 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:08.405 19:11:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:10.948 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:10.948 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:10.948 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:10.948 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:10.948 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:10.948 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:11.207 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.584 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.584 19:11:36 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:13.962 19:11:37 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:13.962 19:11:37 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:13.962 19:11:37 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.962 19:11:37 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:13.962 19:11:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:13.962 19:11:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:13.962 19:11:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.962 19:11:37 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:13.962 19:11:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:13.962 19:11:37 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:13.962 19:11:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:13.962 19:11:37 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.498 Waiting for block devices as requested 00:04:16.498 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:16.757 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:16.757 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:16.757 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.017 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.017 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:17.017 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:17.275 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:17.275 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:17.276 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:17.534 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:17.534 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.534 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.534 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:17.793 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:17.793 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:17.793 0000:80:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:18.053 19:11:41 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:18.053 19:11:41 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:04:18.053 19:11:41 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:18.053 19:11:41 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:18.053 19:11:41 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:18.053 19:11:41 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:18.053 19:11:41 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:04:18.053 19:11:41 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:18.053 19:11:41 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:18.053 19:11:41 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:18.053 19:11:41 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:18.053 19:11:41 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:18.053 19:11:41 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:18.053 19:11:41 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:18.053 19:11:41 -- common/autotest_common.sh@1541 -- # continue 00:04:18.053 19:11:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:18.053 19:11:41 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:18.053 19:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:18.053 19:11:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.053 19:11:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.053 19:11:41 -- common/autotest_common.sh@10 -- # set +x 00:04:18.053 19:11:41 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.344 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.344 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.345 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:22.723 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.723 19:11:46 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:22.723 19:11:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:22.723 19:11:46 -- common/autotest_common.sh@10 -- # set +x 00:04:22.723 19:11:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:22.723 19:11:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:22.723 19:11:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:22.723 19:11:46 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:22.723 19:11:46 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:22.723 19:11:46 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:22.723 19:11:46 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:22.723 19:11:46 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:22.723 19:11:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:22.723 19:11:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:22.723 19:11:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:22.723 19:11:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:22.723 19:11:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:22.723 19:11:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:22.723 19:11:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:04:22.723 19:11:46 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:22.723 19:11:46 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:22.723 19:11:46 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:04:22.723 19:11:46 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:22.723 19:11:46 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:04:22.723 19:11:46 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:04:22.723 19:11:46 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:22.723 19:11:46 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:22.723 19:11:46 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1902563 00:04:22.723 19:11:46 -- common/autotest_common.sh@1583 -- # waitforlisten 1902563 00:04:22.723 19:11:46 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.723 19:11:46 -- common/autotest_common.sh@831 -- # '[' -z 1902563 ']' 00:04:22.723 19:11:46 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.723 19:11:46 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.723 19:11:46 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.723 19:11:46 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.723 19:11:46 -- common/autotest_common.sh@10 -- # set +x 00:04:22.723 [2024-10-17 19:11:46.379199] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
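For context on the bdfs arrays being built in the trace above: get_nvme_bdfs asks gen_nvme.sh for a controller config and extracts the transport addresses with jq, and get_nvme_bdfs_by_id then keeps only controllers whose PCI device ID matches the requested value (0x0a54 here). A condensed sketch of that flow under the same assumptions as the trace ($rootdir is the spdk checkout; sysfs layout as on this Linux host; the matched array name is illustrative):

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))  # all NVMe BDFs
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, e.g. 0x0a54
        [[ $device == 0x0a54 ]] && matched+=("$bdf")       # keep only 8086:0a54 disks
    done

On this node the filter matches a single controller, 0000:5e:00.0, which the test then attaches as bdev nvme0 via rpc.py before attempting the opal revert, as seen just below.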
00:04:22.723 [2024-10-17 19:11:46.379246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1902563 ] 00:04:22.723 [2024-10-17 19:11:46.454848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.723 [2024-10-17 19:11:46.498225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.982 19:11:46 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:22.982 19:11:46 -- common/autotest_common.sh@864 -- # return 0 00:04:22.982 19:11:46 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:22.982 19:11:46 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:22.982 19:11:46 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:26.274 nvme0n1 00:04:26.274 19:11:49 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:26.274 [2024-10-17 19:11:49.883488] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:26.274 request: 00:04:26.274 { 00:04:26.274 "nvme_ctrlr_name": "nvme0", 00:04:26.274 "password": "test", 00:04:26.274 "method": "bdev_nvme_opal_revert", 00:04:26.274 "req_id": 1 00:04:26.274 } 00:04:26.274 Got JSON-RPC error response 00:04:26.274 response: 00:04:26.274 { 00:04:26.274 "code": -32602, 00:04:26.274 "message": "Invalid parameters" 00:04:26.274 } 00:04:26.274 19:11:49 -- common/autotest_common.sh@1589 -- # true 00:04:26.274 19:11:49 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:26.274 19:11:49 -- common/autotest_common.sh@1593 -- # killprocess 1902563 00:04:26.274 19:11:49 -- common/autotest_common.sh@950 -- # '[' -z 1902563 ']' 00:04:26.274 19:11:49 -- common/autotest_common.sh@954 -- # kill -0 1902563 00:04:26.274 19:11:49 -- common/autotest_common.sh@955 -- # uname 00:04:26.274 19:11:49 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:26.274 19:11:49 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1902563 00:04:26.274 19:11:49 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:26.274 19:11:49 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:26.274 19:11:49 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1902563' 00:04:26.274 killing process with pid 1902563 00:04:26.274 19:11:49 -- common/autotest_common.sh@969 -- # kill 1902563 00:04:26.274 19:11:49 -- common/autotest_common.sh@974 -- # wait 1902563 00:04:28.813 19:11:52 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:28.813 19:11:52 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:28.813 19:11:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.813 19:11:52 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.813 19:11:52 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:28.813 19:11:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:28.813 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:28.813 19:11:52 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:28.813 19:11:52 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:28.813 19:11:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.813 19:11:52 -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:04:28.813 19:11:52 -- common/autotest_common.sh@10 -- # set +x 00:04:28.813 ************************************ 00:04:28.813 START TEST env 00:04:28.813 ************************************ 00:04:28.813 19:11:52 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:28.813 * Looking for test storage... 00:04:28.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:28.813 19:11:52 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.813 19:11:52 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.813 19:11:52 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.813 19:11:52 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.813 19:11:52 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.813 19:11:52 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.813 19:11:52 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.813 19:11:52 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.813 19:11:52 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.813 19:11:52 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.813 19:11:52 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.813 19:11:52 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.813 19:11:52 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.813 19:11:52 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.813 19:11:52 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.813 19:11:52 env -- scripts/common.sh@344 -- # case "$op" in 00:04:28.813 19:11:52 env -- scripts/common.sh@345 -- # : 1 00:04:28.813 19:11:52 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.813 19:11:52 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:28.813 19:11:52 env -- scripts/common.sh@365 -- # decimal 1 00:04:28.813 19:11:52 env -- scripts/common.sh@353 -- # local d=1 00:04:28.814 19:11:52 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.814 19:11:52 env -- scripts/common.sh@355 -- # echo 1 00:04:28.814 19:11:52 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.814 19:11:52 env -- scripts/common.sh@366 -- # decimal 2 00:04:28.814 19:11:52 env -- scripts/common.sh@353 -- # local d=2 00:04:28.814 19:11:52 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.814 19:11:52 env -- scripts/common.sh@355 -- # echo 2 00:04:28.814 19:11:52 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.814 19:11:52 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.814 19:11:52 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.814 19:11:52 env -- scripts/common.sh@368 -- # return 0 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.814 --rc genhtml_branch_coverage=1 00:04:28.814 --rc genhtml_function_coverage=1 00:04:28.814 --rc genhtml_legend=1 00:04:28.814 --rc geninfo_all_blocks=1 00:04:28.814 --rc geninfo_unexecuted_blocks=1 00:04:28.814 00:04:28.814 ' 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.814 --rc genhtml_branch_coverage=1 00:04:28.814 --rc genhtml_function_coverage=1 00:04:28.814 --rc genhtml_legend=1 00:04:28.814 --rc geninfo_all_blocks=1 00:04:28.814 --rc geninfo_unexecuted_blocks=1 00:04:28.814 00:04:28.814 ' 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.814 --rc genhtml_branch_coverage=1 00:04:28.814 --rc genhtml_function_coverage=1 00:04:28.814 --rc genhtml_legend=1 00:04:28.814 --rc geninfo_all_blocks=1 00:04:28.814 --rc geninfo_unexecuted_blocks=1 00:04:28.814 00:04:28.814 ' 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.814 --rc genhtml_branch_coverage=1 00:04:28.814 --rc genhtml_function_coverage=1 00:04:28.814 --rc genhtml_legend=1 00:04:28.814 --rc geninfo_all_blocks=1 00:04:28.814 --rc geninfo_unexecuted_blocks=1 00:04:28.814 00:04:28.814 ' 00:04:28.814 19:11:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.814 19:11:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.814 ************************************ 00:04:28.814 START TEST env_memory 00:04:28.814 ************************************ 00:04:28.814 19:11:52 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:28.814 00:04:28.814 00:04:28.814 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.814 http://cunit.sourceforge.net/ 00:04:28.814 00:04:28.814 00:04:28.814 Suite: memory 00:04:28.814 Test: alloc and free memory map ...[2024-10-17 19:11:52.380177] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.814 passed 00:04:28.814 Test: mem map translation ...[2024-10-17 19:11:52.397847] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.814 [2024-10-17 19:11:52.397862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.814 [2024-10-17 19:11:52.397896] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.814 [2024-10-17 19:11:52.397902] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.814 passed 00:04:28.814 Test: mem map registration ...[2024-10-17 19:11:52.433449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:28.814 [2024-10-17 19:11:52.433463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:28.814 passed 00:04:28.814 Test: mem map adjacent registrations ...passed 00:04:28.814 00:04:28.814 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.814 suites 1 1 n/a 0 0 00:04:28.814 tests 4 4 4 0 0 00:04:28.814 asserts 152 152 152 0 n/a 00:04:28.814 00:04:28.814 Elapsed time = 0.132 seconds 00:04:28.814 00:04:28.814 real 0m0.145s 00:04:28.814 user 0m0.135s 00:04:28.814 sys 0m0.010s 00:04:28.814 19:11:52 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:28.814 19:11:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.814 ************************************ 00:04:28.814 END TEST env_memory 00:04:28.814 ************************************ 00:04:28.814 19:11:52 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:28.814 19:11:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:28.814 19:11:52 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.814 ************************************ 00:04:28.814 START TEST env_vtophys 00:04:28.814 ************************************ 00:04:28.814 19:11:52 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:28.814 EAL: lib.eal log level changed from notice to debug 00:04:28.814 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.814 EAL: Detected lcore 1 as core 1 on socket 0 00:04:28.814 EAL: Detected lcore 2 as core 2 on socket 0 00:04:28.814 EAL: Detected lcore 3 as core 3 on socket 0 00:04:28.814 EAL: Detected lcore 4 as core 4 on socket 0 00:04:28.814 EAL: Detected lcore 5 as core 5 on socket 0 00:04:28.814 EAL: Detected lcore 6 as core 6 on socket 0 00:04:28.814 EAL: Detected lcore 7 as core 8 on socket 0 00:04:28.814 EAL: Detected lcore 8 as core 9 on socket 0 00:04:28.814 EAL: Detected lcore 9 as core 10 on socket 0 00:04:28.814 EAL: Detected lcore 10 as 
core 11 on socket 0 00:04:28.814 EAL: Detected lcore 11 as core 12 on socket 0 00:04:28.814 EAL: Detected lcore 12 as core 13 on socket 0 00:04:28.814 EAL: Detected lcore 13 as core 16 on socket 0 00:04:28.814 EAL: Detected lcore 14 as core 17 on socket 0 00:04:28.814 EAL: Detected lcore 15 as core 18 on socket 0 00:04:28.814 EAL: Detected lcore 16 as core 19 on socket 0 00:04:28.814 EAL: Detected lcore 17 as core 20 on socket 0 00:04:28.814 EAL: Detected lcore 18 as core 21 on socket 0 00:04:28.814 EAL: Detected lcore 19 as core 25 on socket 0 00:04:28.814 EAL: Detected lcore 20 as core 26 on socket 0 00:04:28.814 EAL: Detected lcore 21 as core 27 on socket 0 00:04:28.814 EAL: Detected lcore 22 as core 28 on socket 0 00:04:28.814 EAL: Detected lcore 23 as core 29 on socket 0 00:04:28.814 EAL: Detected lcore 24 as core 0 on socket 1 00:04:28.814 EAL: Detected lcore 25 as core 1 on socket 1 00:04:28.814 EAL: Detected lcore 26 as core 2 on socket 1 00:04:28.814 EAL: Detected lcore 27 as core 3 on socket 1 00:04:28.814 EAL: Detected lcore 28 as core 4 on socket 1 00:04:28.814 EAL: Detected lcore 29 as core 5 on socket 1 00:04:28.814 EAL: Detected lcore 30 as core 6 on socket 1 00:04:28.814 EAL: Detected lcore 31 as core 8 on socket 1 00:04:28.814 EAL: Detected lcore 32 as core 10 on socket 1 00:04:28.814 EAL: Detected lcore 33 as core 11 on socket 1 00:04:28.814 EAL: Detected lcore 34 as core 12 on socket 1 00:04:28.814 EAL: Detected lcore 35 as core 13 on socket 1 00:04:28.814 EAL: Detected lcore 36 as core 16 on socket 1 00:04:28.814 EAL: Detected lcore 37 as core 17 on socket 1 00:04:28.814 EAL: Detected lcore 38 as core 18 on socket 1 00:04:28.814 EAL: Detected lcore 39 as core 19 on socket 1 00:04:28.814 EAL: Detected lcore 40 as core 20 on socket 1 00:04:28.814 EAL: Detected lcore 41 as core 21 on socket 1 00:04:28.814 EAL: Detected lcore 42 as core 24 on socket 1 00:04:28.814 EAL: Detected lcore 43 as core 25 on socket 1 00:04:28.814 EAL: Detected lcore 44 as core 26 on socket 1 00:04:28.814 EAL: Detected lcore 45 as core 27 on socket 1 00:04:28.814 EAL: Detected lcore 46 as core 28 on socket 1 00:04:28.814 EAL: Detected lcore 47 as core 29 on socket 1 00:04:28.814 EAL: Detected lcore 48 as core 0 on socket 0 00:04:28.814 EAL: Detected lcore 49 as core 1 on socket 0 00:04:28.814 EAL: Detected lcore 50 as core 2 on socket 0 00:04:28.814 EAL: Detected lcore 51 as core 3 on socket 0 00:04:28.814 EAL: Detected lcore 52 as core 4 on socket 0 00:04:28.814 EAL: Detected lcore 53 as core 5 on socket 0 00:04:28.814 EAL: Detected lcore 54 as core 6 on socket 0 00:04:28.814 EAL: Detected lcore 55 as core 8 on socket 0 00:04:28.814 EAL: Detected lcore 56 as core 9 on socket 0 00:04:28.814 EAL: Detected lcore 57 as core 10 on socket 0 00:04:28.814 EAL: Detected lcore 58 as core 11 on socket 0 00:04:28.814 EAL: Detected lcore 59 as core 12 on socket 0 00:04:28.814 EAL: Detected lcore 60 as core 13 on socket 0 00:04:28.815 EAL: Detected lcore 61 as core 16 on socket 0 00:04:28.815 EAL: Detected lcore 62 as core 17 on socket 0 00:04:28.815 EAL: Detected lcore 63 as core 18 on socket 0 00:04:28.815 EAL: Detected lcore 64 as core 19 on socket 0 00:04:28.815 EAL: Detected lcore 65 as core 20 on socket 0 00:04:28.815 EAL: Detected lcore 66 as core 21 on socket 0 00:04:28.815 EAL: Detected lcore 67 as core 25 on socket 0 00:04:28.815 EAL: Detected lcore 68 as core 26 on socket 0 00:04:28.815 EAL: Detected lcore 69 as core 27 on socket 0 00:04:28.815 EAL: Detected lcore 70 as core 28 on socket 0 
00:04:28.815 EAL: Detected lcore 71 as core 29 on socket 0 00:04:28.815 EAL: Detected lcore 72 as core 0 on socket 1 00:04:28.815 EAL: Detected lcore 73 as core 1 on socket 1 00:04:28.815 EAL: Detected lcore 74 as core 2 on socket 1 00:04:28.815 EAL: Detected lcore 75 as core 3 on socket 1 00:04:28.815 EAL: Detected lcore 76 as core 4 on socket 1 00:04:28.815 EAL: Detected lcore 77 as core 5 on socket 1 00:04:28.815 EAL: Detected lcore 78 as core 6 on socket 1 00:04:28.815 EAL: Detected lcore 79 as core 8 on socket 1 00:04:28.815 EAL: Detected lcore 80 as core 10 on socket 1 00:04:28.815 EAL: Detected lcore 81 as core 11 on socket 1 00:04:28.815 EAL: Detected lcore 82 as core 12 on socket 1 00:04:28.815 EAL: Detected lcore 83 as core 13 on socket 1 00:04:28.815 EAL: Detected lcore 84 as core 16 on socket 1 00:04:28.815 EAL: Detected lcore 85 as core 17 on socket 1 00:04:28.815 EAL: Detected lcore 86 as core 18 on socket 1 00:04:28.815 EAL: Detected lcore 87 as core 19 on socket 1 00:04:28.815 EAL: Detected lcore 88 as core 20 on socket 1 00:04:28.815 EAL: Detected lcore 89 as core 21 on socket 1 00:04:28.815 EAL: Detected lcore 90 as core 24 on socket 1 00:04:28.815 EAL: Detected lcore 91 as core 25 on socket 1 00:04:28.815 EAL: Detected lcore 92 as core 26 on socket 1 00:04:28.815 EAL: Detected lcore 93 as core 27 on socket 1 00:04:28.815 EAL: Detected lcore 94 as core 28 on socket 1 00:04:28.815 EAL: Detected lcore 95 as core 29 on socket 1 00:04:28.815 EAL: Maximum logical cores by configuration: 128 00:04:28.815 EAL: Detected CPU lcores: 96 00:04:28.815 EAL: Detected NUMA nodes: 2 00:04:28.815 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.815 EAL: Detected shared linkage of DPDK 00:04:28.815 EAL: No shared files mode enabled, IPC will be disabled 00:04:29.075 EAL: Bus pci wants IOVA as 'DC' 00:04:29.075 EAL: Buses did not request a specific IOVA mode. 00:04:29.075 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:29.075 EAL: Selected IOVA mode 'VA' 00:04:29.075 EAL: Probing VFIO support... 00:04:29.075 EAL: IOMMU type 1 (Type 1) is supported 00:04:29.075 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:29.075 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:29.075 EAL: VFIO support initialized 00:04:29.075 EAL: Ask a virtual area of 0x2e000 bytes 00:04:29.075 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:29.075 EAL: Setting up physically contiguous memory... 
00:04:29.075 EAL: Setting maximum number of open files to 524288 00:04:29.075 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:29.075 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:29.075 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:29.075 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.075 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:29.075 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.075 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.075 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:29.075 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:29.075 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.075 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:29.075 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.075 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.075 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:29.075 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:29.075 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.075 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:29.075 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.075 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.075 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:29.075 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:29.075 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.075 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:29.075 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:29.075 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.075 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:29.075 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:29.075 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:29.075 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.075 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:29.076 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.076 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.076 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:29.076 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:29.076 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.076 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:29.076 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.076 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.076 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:29.076 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:29.076 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.076 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:29.076 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.076 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.076 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:29.076 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:29.076 EAL: Ask a virtual area of 0x61000 bytes 00:04:29.076 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:29.076 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:29.076 EAL: Ask a virtual area of 0x400000000 bytes 00:04:29.076 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:29.076 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:29.076 EAL: Hugepages will be freed exactly as allocated. 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: TSC frequency is ~2100000 KHz 00:04:29.076 EAL: Main lcore 0 is ready (tid=7f7e01d4ba00;cpuset=[0]) 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 0 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 2MB 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:29.076 EAL: Mem event callback 'spdk:(nil)' registered 00:04:29.076 00:04:29.076 00:04:29.076 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.076 http://cunit.sourceforge.net/ 00:04:29.076 00:04:29.076 00:04:29.076 Suite: components_suite 00:04:29.076 Test: vtophys_malloc_test ...passed 00:04:29.076 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 4MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 4MB 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 6MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 6MB 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 10MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 10MB 00:04:29.076 EAL: Trying to obtain current memory policy. 
00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 18MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 18MB 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 34MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 34MB 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 66MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 66MB 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 130MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was shrunk by 130MB 00:04:29.076 EAL: Trying to obtain current memory policy. 00:04:29.076 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.076 EAL: Restoring previous memory policy: 4 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.076 EAL: request: mp_malloc_sync 00:04:29.076 EAL: No shared files mode enabled, IPC is disabled 00:04:29.076 EAL: Heap on socket 0 was expanded by 258MB 00:04:29.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.336 EAL: request: mp_malloc_sync 00:04:29.336 EAL: No shared files mode enabled, IPC is disabled 00:04:29.336 EAL: Heap on socket 0 was shrunk by 258MB 00:04:29.336 EAL: Trying to obtain current memory policy. 
00:04:29.336 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.336 EAL: Restoring previous memory policy: 4 00:04:29.336 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.336 EAL: request: mp_malloc_sync 00:04:29.336 EAL: No shared files mode enabled, IPC is disabled 00:04:29.336 EAL: Heap on socket 0 was expanded by 514MB 00:04:29.336 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.595 EAL: request: mp_malloc_sync 00:04:29.595 EAL: No shared files mode enabled, IPC is disabled 00:04:29.595 EAL: Heap on socket 0 was shrunk by 514MB 00:04:29.595 EAL: Trying to obtain current memory policy. 00:04:29.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.595 EAL: Restoring previous memory policy: 4 00:04:29.595 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.595 EAL: request: mp_malloc_sync 00:04:29.595 EAL: No shared files mode enabled, IPC is disabled 00:04:29.595 EAL: Heap on socket 0 was expanded by 1026MB 00:04:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.114 EAL: request: mp_malloc_sync 00:04:30.114 EAL: No shared files mode enabled, IPC is disabled 00:04:30.114 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:30.114 passed 00:04:30.114 00:04:30.115 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.115 suites 1 1 n/a 0 0 00:04:30.115 tests 2 2 2 0 0 00:04:30.115 asserts 497 497 497 0 n/a 00:04:30.115 00:04:30.115 Elapsed time = 0.972 seconds 00:04:30.115 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.115 EAL: request: mp_malloc_sync 00:04:30.115 EAL: No shared files mode enabled, IPC is disabled 00:04:30.115 EAL: Heap on socket 0 was shrunk by 2MB 00:04:30.115 EAL: No shared files mode enabled, IPC is disabled 00:04:30.115 EAL: No shared files mode enabled, IPC is disabled 00:04:30.115 EAL: No shared files mode enabled, IPC is disabled 00:04:30.115 00:04:30.115 real 0m1.106s 00:04:30.115 user 0m0.643s 00:04:30.115 sys 0m0.430s 00:04:30.115 19:11:53 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:30.115 19:11:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:30.115 ************************************ 00:04:30.115 END TEST env_vtophys 00:04:30.115 ************************************ 00:04:30.115 19:11:53 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.115 19:11:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:30.115 19:11:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:30.115 19:11:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:30.115 ************************************ 00:04:30.115 START TEST env_pci 00:04:30.115 ************************************ 00:04:30.115 19:11:53 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:30.115 00:04:30.115 00:04:30.115 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.115 http://cunit.sourceforge.net/ 00:04:30.115 00:04:30.115 00:04:30.115 Suite: pci 00:04:30.115 Test: pci_hook ...[2024-10-17 19:11:53.747016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1903884 has claimed it 00:04:30.115 EAL: Cannot find device (10000:00:01.0) 00:04:30.115 EAL: Failed to attach device on primary process 00:04:30.115 passed 00:04:30.115 00:04:30.115 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:30.115 suites 1 1 n/a 0 0
00:04:30.115 tests 1 1 1 0 0
00:04:30.115 asserts 25 25 25 0 n/a
00:04:30.115
00:04:30.115 Elapsed time = 0.029 seconds
00:04:30.115
00:04:30.115 real 0m0.049s
00:04:30.115 user 0m0.019s
00:04:30.115 sys 0m0.030s
00:04:30.115 19:11:53 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:30.115 19:11:53 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:30.115 ************************************
00:04:30.115 END TEST env_pci
00:04:30.115 ************************************
00:04:30.115 19:11:53 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:30.115 19:11:53 env -- env/env.sh@15 -- # uname
00:04:30.115 19:11:53 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:30.115 19:11:53 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:30.115 19:11:53 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:30.115 19:11:53 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:04:30.115 19:11:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:30.115 19:11:53 env -- common/autotest_common.sh@10 -- # set +x
00:04:30.115 ************************************
00:04:30.115 START TEST env_dpdk_post_init
00:04:30.115 ************************************
00:04:30.115 19:11:53 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:30.115 EAL: Detected CPU lcores: 96
00:04:30.115 EAL: Detected NUMA nodes: 2
00:04:30.115 EAL: Detected shared linkage of DPDK
00:04:30.115 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:30.375 EAL: Selected IOVA mode 'VA'
00:04:30.375 EAL: VFIO support initialized
00:04:30.375 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:30.375 EAL: Using IOMMU type 1 (Type 1)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:30.375 EAL: Ignore mapping IO port bar(1)
00:04:30.375 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:31.313 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:31.313 EAL: Ignore mapping IO port bar(1)
00:04:31.313 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:34.607 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:34.607 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:35.176 Starting DPDK initialization...
00:04:35.176 Starting SPDK post initialization...
00:04:35.177 SPDK NVMe probe
00:04:35.177 Attaching to 0000:5e:00.0
00:04:35.177 Attached to 0000:5e:00.0
00:04:35.177 Cleaning up...
00:04:35.177
00:04:35.177 real 0m4.839s
00:04:35.177 user 0m3.422s
00:04:35.177 sys 0m0.484s
00:04:35.177 19:11:58 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.177 19:11:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:35.177 ************************************
00:04:35.177 END TEST env_dpdk_post_init
00:04:35.177 ************************************
00:04:35.177 19:11:58 env -- env/env.sh@26 -- # uname
00:04:35.177 19:11:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:35.177 19:11:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:35.177 19:11:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:35.177 19:11:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.177 19:11:58 env -- common/autotest_common.sh@10 -- # set +x
00:04:35.177 ************************************
00:04:35.177 START TEST env_mem_callbacks
00:04:35.177 ************************************
00:04:35.177 19:11:58 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:35.177 EAL: Detected CPU lcores: 96
00:04:35.177 EAL: Detected NUMA nodes: 2
00:04:35.177 EAL: Detected shared linkage of DPDK
00:04:35.177 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:35.177 EAL: Selected IOVA mode 'VA'
00:04:35.177 EAL: VFIO support initialized
00:04:35.177 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:35.177
00:04:35.177
00:04:35.177 CUnit - A unit testing framework for C - Version 2.1-3
00:04:35.177 http://cunit.sourceforge.net/
00:04:35.177
00:04:35.177
00:04:35.177 Suite: memory
00:04:35.177 Test: test ...
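The register/unregister lines in the trace below are the heart of the mem_callbacks test: SPDK's env layer notifies registered memory-map callbacks whenever an address range is added to or removed from its translation maps, and the test prints each notification alongside the malloc/free that triggered it. A minimal sketch of driving those notifications through the public env API; the app name is illustrative, error handling is trimmed, and the alignment reflects that spdk_mem_register() expects 2 MiB-aligned ranges (hence posix_memalign):

    #include <stdlib.h>
    #include "spdk/env.h"

    #define REGION_SZ (2 * 1024 * 1024)

    int
    main(void)
    {
            struct spdk_env_opts opts;
            void *buf = NULL;

            spdk_env_opts_init(&opts);
            opts.name = "mem_callbacks_sketch";   /* illustrative app name */
            if (spdk_env_init(&opts) < 0)
                    return 1;

            /* 2 MiB-aligned ordinary memory, unknown to SPDK so far. */
            if (posix_memalign(&buf, REGION_SZ, REGION_SZ) != 0)
                    return 1;

            spdk_mem_register(buf, REGION_SZ);    /* fires "register <vaddr> <len>" */
            /* ... range is now resolvable through SPDK's memory maps ... */
            spdk_mem_unregister(buf, REGION_SZ);  /* fires "unregister <vaddr> <len>" */

            free(buf);
            return 0;
    }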
00:04:35.177 register 0x200000200000 2097152
00:04:35.177 malloc 3145728
00:04:35.177 register 0x200000400000 4194304
00:04:35.177 buf 0x200000500000 len 3145728 PASSED
00:04:35.177 malloc 64
00:04:35.177 buf 0x2000004fff40 len 64 PASSED
00:04:35.177 malloc 4194304
00:04:35.177 register 0x200000800000 6291456
00:04:35.177 buf 0x200000a00000 len 4194304 PASSED
00:04:35.177 free 0x200000500000 3145728
00:04:35.177 free 0x2000004fff40 64
00:04:35.177 unregister 0x200000400000 4194304 PASSED
00:04:35.177 free 0x200000a00000 4194304
00:04:35.177 unregister 0x200000800000 6291456 PASSED
00:04:35.177 malloc 8388608
00:04:35.177 register 0x200000400000 10485760
00:04:35.177 buf 0x200000600000 len 8388608 PASSED
00:04:35.177 free 0x200000600000 8388608
00:04:35.177 unregister 0x200000400000 10485760 PASSED
00:04:35.177 passed
00:04:35.177
00:04:35.177 Run Summary: Type Total Ran Passed Failed Inactive
00:04:35.177 suites 1 1 n/a 0 0
00:04:35.177 tests 1 1 1 0 0
00:04:35.177 asserts 15 15 15 0 n/a
00:04:35.177
00:04:35.177 Elapsed time = 0.008 seconds
00:04:35.177
00:04:35.177 real 0m0.063s
00:04:35.177 user 0m0.017s
00:04:35.177 sys 0m0.045s
00:04:35.177 19:11:58 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.177 19:11:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:35.177 ************************************
00:04:35.177 END TEST env_mem_callbacks
00:04:35.177 ************************************
00:04:35.177
00:04:35.177 real 0m6.735s
00:04:35.177 user 0m4.469s
00:04:35.177 sys 0m1.337s
00:04:35.177 19:11:58 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:35.177 19:11:58 env -- common/autotest_common.sh@10 -- # set +x
00:04:35.177 ************************************
00:04:35.177 END TEST env
00:04:35.177 ************************************
00:04:35.177 19:11:58 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:35.177 19:11:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:35.177 19:11:58 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:35.177 19:11:58 -- common/autotest_common.sh@10 -- # set +x
00:04:35.177 ************************************
00:04:35.177 START TEST rpc
00:04:35.177 ************************************
00:04:35.177 19:11:58 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:35.437 * Looking for test storage...
00:04:35.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.437 19:11:59 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.437 19:11:59 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.437 19:11:59 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.437 19:11:59 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.437 19:11:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.437 19:11:59 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.437 19:11:59 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.437 19:11:59 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.437 19:11:59 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.437 19:11:59 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:35.437 19:11:59 rpc -- scripts/common.sh@345 -- # : 1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.437 19:11:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.437 19:11:59 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@353 -- # local d=1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.437 19:11:59 rpc -- scripts/common.sh@355 -- # echo 1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.437 19:11:59 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@353 -- # local d=2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.437 19:11:59 rpc -- scripts/common.sh@355 -- # echo 2 00:04:35.437 19:11:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.438 19:11:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.438 19:11:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.438 19:11:59 rpc -- scripts/common.sh@368 -- # return 0 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.438 --rc genhtml_branch_coverage=1 00:04:35.438 --rc genhtml_function_coverage=1 00:04:35.438 --rc genhtml_legend=1 00:04:35.438 --rc geninfo_all_blocks=1 00:04:35.438 --rc geninfo_unexecuted_blocks=1 00:04:35.438 00:04:35.438 ' 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.438 --rc genhtml_branch_coverage=1 00:04:35.438 --rc genhtml_function_coverage=1 00:04:35.438 --rc genhtml_legend=1 00:04:35.438 --rc geninfo_all_blocks=1 00:04:35.438 --rc geninfo_unexecuted_blocks=1 00:04:35.438 00:04:35.438 ' 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.438 --rc genhtml_branch_coverage=1 00:04:35.438 --rc genhtml_function_coverage=1 
00:04:35.438 --rc genhtml_legend=1 00:04:35.438 --rc geninfo_all_blocks=1 00:04:35.438 --rc geninfo_unexecuted_blocks=1 00:04:35.438 00:04:35.438 ' 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.438 --rc genhtml_branch_coverage=1 00:04:35.438 --rc genhtml_function_coverage=1 00:04:35.438 --rc genhtml_legend=1 00:04:35.438 --rc geninfo_all_blocks=1 00:04:35.438 --rc geninfo_unexecuted_blocks=1 00:04:35.438 00:04:35.438 ' 00:04:35.438 19:11:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1904931 00:04:35.438 19:11:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:35.438 19:11:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.438 19:11:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1904931 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 1904931 ']' 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:35.438 19:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.438 [2024-10-17 19:11:59.174385] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:04:35.438 [2024-10-17 19:11:59.174433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1904931 ] 00:04:35.698 [2024-10-17 19:11:59.250531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.698 [2024-10-17 19:11:59.291804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:35.698 [2024-10-17 19:11:59.291838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1904931' to capture a snapshot of events at runtime. 00:04:35.698 [2024-10-17 19:11:59.291845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:35.698 [2024-10-17 19:11:59.291851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:35.698 [2024-10-17 19:11:59.291856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1904931 for offline analysis/debug. 
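From this point rpc.sh exercises the freshly started spdk_tgt entirely over its JSON-RPC 2.0 server, which listens on the Unix domain socket /var/tmp/spdk.sock: every rpc_cmd in the transcript below (bdev_malloc_create, bdev_passthru_create, bdev_get_bdevs, trace_get_info, ...) is one request/response exchange on that socket. A bare-bones client sketch using only POSIX sockets, with a single read and no response framing or error handling, just to show the wire format:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);

            if (fd < 0)
                    return 1;
            strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                    perror("connect");
                    return 1;
            }

            /* The same method the rpc_cmd wrapper issues in the log below. */
            const char *req =
                "{\"jsonrpc\":\"2.0\",\"method\":\"bdev_get_bdevs\",\"id\":1}";
            write(fd, req, strlen(req));

            char resp[8192];
            ssize_t n = read(fd, resp, sizeof(resp) - 1);  /* first chunk only */
            if (n > 0) {
                    resp[n] = '\0';
                    printf("%s\n", resp);
            }
            close(fd);
            return 0;
    }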
00:04:35.698 [2024-10-17 19:11:59.292430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.958 19:11:59 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:35.958 19:11:59 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:35.958 19:11:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.958 19:11:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:35.958 19:11:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.958 19:11:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.958 19:11:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.958 19:11:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.958 19:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 ************************************ 00:04:35.958 START TEST rpc_integrity 00:04:35.958 ************************************ 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.958 { 00:04:35.958 "name": "Malloc0", 00:04:35.958 "aliases": [ 00:04:35.958 "a510cdc0-9718-4d96-9f62-4d2804b22ae2" 00:04:35.958 ], 00:04:35.958 "product_name": "Malloc disk", 00:04:35.958 "block_size": 512, 00:04:35.958 "num_blocks": 16384, 00:04:35.958 "uuid": "a510cdc0-9718-4d96-9f62-4d2804b22ae2", 00:04:35.958 "assigned_rate_limits": { 00:04:35.958 "rw_ios_per_sec": 0, 00:04:35.958 "rw_mbytes_per_sec": 0, 00:04:35.958 "r_mbytes_per_sec": 0, 00:04:35.958 "w_mbytes_per_sec": 0 00:04:35.958 }, 
00:04:35.958 "claimed": false, 00:04:35.958 "zoned": false, 00:04:35.958 "supported_io_types": { 00:04:35.958 "read": true, 00:04:35.958 "write": true, 00:04:35.958 "unmap": true, 00:04:35.958 "flush": true, 00:04:35.958 "reset": true, 00:04:35.958 "nvme_admin": false, 00:04:35.958 "nvme_io": false, 00:04:35.958 "nvme_io_md": false, 00:04:35.958 "write_zeroes": true, 00:04:35.958 "zcopy": true, 00:04:35.958 "get_zone_info": false, 00:04:35.958 "zone_management": false, 00:04:35.958 "zone_append": false, 00:04:35.958 "compare": false, 00:04:35.958 "compare_and_write": false, 00:04:35.958 "abort": true, 00:04:35.958 "seek_hole": false, 00:04:35.958 "seek_data": false, 00:04:35.958 "copy": true, 00:04:35.958 "nvme_iov_md": false 00:04:35.958 }, 00:04:35.958 "memory_domains": [ 00:04:35.958 { 00:04:35.958 "dma_device_id": "system", 00:04:35.958 "dma_device_type": 1 00:04:35.958 }, 00:04:35.958 { 00:04:35.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.958 "dma_device_type": 2 00:04:35.958 } 00:04:35.958 ], 00:04:35.958 "driver_specific": {} 00:04:35.958 } 00:04:35.958 ]' 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 [2024-10-17 19:11:59.651286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.958 [2024-10-17 19:11:59.651313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.958 [2024-10-17 19:11:59.651324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f6790 00:04:35.958 [2024-10-17 19:11:59.651330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.958 [2024-10-17 19:11:59.652397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.958 [2024-10-17 19:11:59.652416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.958 Passthru0 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.958 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.958 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.958 { 00:04:35.958 "name": "Malloc0", 00:04:35.958 "aliases": [ 00:04:35.958 "a510cdc0-9718-4d96-9f62-4d2804b22ae2" 00:04:35.958 ], 00:04:35.958 "product_name": "Malloc disk", 00:04:35.958 "block_size": 512, 00:04:35.958 "num_blocks": 16384, 00:04:35.958 "uuid": "a510cdc0-9718-4d96-9f62-4d2804b22ae2", 00:04:35.958 "assigned_rate_limits": { 00:04:35.958 "rw_ios_per_sec": 0, 00:04:35.958 "rw_mbytes_per_sec": 0, 00:04:35.958 "r_mbytes_per_sec": 0, 00:04:35.959 "w_mbytes_per_sec": 0 00:04:35.959 }, 00:04:35.959 "claimed": true, 00:04:35.959 "claim_type": "exclusive_write", 00:04:35.959 "zoned": false, 00:04:35.959 "supported_io_types": { 00:04:35.959 "read": true, 00:04:35.959 "write": true, 00:04:35.959 "unmap": true, 00:04:35.959 "flush": 
true, 00:04:35.959 "reset": true, 00:04:35.959 "nvme_admin": false, 00:04:35.959 "nvme_io": false, 00:04:35.959 "nvme_io_md": false, 00:04:35.959 "write_zeroes": true, 00:04:35.959 "zcopy": true, 00:04:35.959 "get_zone_info": false, 00:04:35.959 "zone_management": false, 00:04:35.959 "zone_append": false, 00:04:35.959 "compare": false, 00:04:35.959 "compare_and_write": false, 00:04:35.959 "abort": true, 00:04:35.959 "seek_hole": false, 00:04:35.959 "seek_data": false, 00:04:35.959 "copy": true, 00:04:35.959 "nvme_iov_md": false 00:04:35.959 }, 00:04:35.959 "memory_domains": [ 00:04:35.959 { 00:04:35.959 "dma_device_id": "system", 00:04:35.959 "dma_device_type": 1 00:04:35.959 }, 00:04:35.959 { 00:04:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.959 "dma_device_type": 2 00:04:35.959 } 00:04:35.959 ], 00:04:35.959 "driver_specific": {} 00:04:35.959 }, 00:04:35.959 { 00:04:35.959 "name": "Passthru0", 00:04:35.959 "aliases": [ 00:04:35.959 "7b49bd1e-ca6e-5ab2-94f8-961869664910" 00:04:35.959 ], 00:04:35.959 "product_name": "passthru", 00:04:35.959 "block_size": 512, 00:04:35.959 "num_blocks": 16384, 00:04:35.959 "uuid": "7b49bd1e-ca6e-5ab2-94f8-961869664910", 00:04:35.959 "assigned_rate_limits": { 00:04:35.959 "rw_ios_per_sec": 0, 00:04:35.959 "rw_mbytes_per_sec": 0, 00:04:35.959 "r_mbytes_per_sec": 0, 00:04:35.959 "w_mbytes_per_sec": 0 00:04:35.959 }, 00:04:35.959 "claimed": false, 00:04:35.959 "zoned": false, 00:04:35.959 "supported_io_types": { 00:04:35.959 "read": true, 00:04:35.959 "write": true, 00:04:35.959 "unmap": true, 00:04:35.959 "flush": true, 00:04:35.959 "reset": true, 00:04:35.959 "nvme_admin": false, 00:04:35.959 "nvme_io": false, 00:04:35.959 "nvme_io_md": false, 00:04:35.959 "write_zeroes": true, 00:04:35.959 "zcopy": true, 00:04:35.959 "get_zone_info": false, 00:04:35.959 "zone_management": false, 00:04:35.959 "zone_append": false, 00:04:35.959 "compare": false, 00:04:35.959 "compare_and_write": false, 00:04:35.959 "abort": true, 00:04:35.959 "seek_hole": false, 00:04:35.959 "seek_data": false, 00:04:35.959 "copy": true, 00:04:35.959 "nvme_iov_md": false 00:04:35.959 }, 00:04:35.959 "memory_domains": [ 00:04:35.959 { 00:04:35.959 "dma_device_id": "system", 00:04:35.959 "dma_device_type": 1 00:04:35.959 }, 00:04:35.959 { 00:04:35.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.959 "dma_device_type": 2 00:04:35.959 } 00:04:35.959 ], 00:04:35.959 "driver_specific": { 00:04:35.959 "passthru": { 00:04:35.959 "name": "Passthru0", 00:04:35.959 "base_bdev_name": "Malloc0" 00:04:35.959 } 00:04:35.959 } 00:04:35.959 } 00:04:35.959 ]' 00:04:35.959 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.959 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.959 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.959 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.959 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.959 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.959 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.959 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.959 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.218 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:36.218 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.218 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.218 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.218 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.218 19:11:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.218 00:04:36.218 real 0m0.271s 00:04:36.218 user 0m0.175s 00:04:36.218 sys 0m0.035s 00:04:36.218 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.218 19:11:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 ************************************ 00:04:36.218 END TEST rpc_integrity 00:04:36.218 ************************************ 00:04:36.218 19:11:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:36.218 19:11:59 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.218 19:11:59 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.218 19:11:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.218 ************************************ 00:04:36.218 START TEST rpc_plugins 00:04:36.218 ************************************ 00:04:36.218 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:36.218 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:36.218 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:36.219 { 00:04:36.219 "name": "Malloc1", 00:04:36.219 "aliases": [ 00:04:36.219 "0b419657-de55-4964-98dc-732118428f55" 00:04:36.219 ], 00:04:36.219 "product_name": "Malloc disk", 00:04:36.219 "block_size": 4096, 00:04:36.219 "num_blocks": 256, 00:04:36.219 "uuid": "0b419657-de55-4964-98dc-732118428f55", 00:04:36.219 "assigned_rate_limits": { 00:04:36.219 "rw_ios_per_sec": 0, 00:04:36.219 "rw_mbytes_per_sec": 0, 00:04:36.219 "r_mbytes_per_sec": 0, 00:04:36.219 "w_mbytes_per_sec": 0 00:04:36.219 }, 00:04:36.219 "claimed": false, 00:04:36.219 "zoned": false, 00:04:36.219 "supported_io_types": { 00:04:36.219 "read": true, 00:04:36.219 "write": true, 00:04:36.219 "unmap": true, 00:04:36.219 "flush": true, 00:04:36.219 "reset": true, 00:04:36.219 "nvme_admin": false, 00:04:36.219 "nvme_io": false, 00:04:36.219 "nvme_io_md": false, 00:04:36.219 "write_zeroes": true, 00:04:36.219 "zcopy": true, 00:04:36.219 "get_zone_info": false, 00:04:36.219 "zone_management": false, 00:04:36.219 "zone_append": false, 00:04:36.219 "compare": false, 00:04:36.219 "compare_and_write": false, 00:04:36.219 "abort": true, 00:04:36.219 "seek_hole": false, 00:04:36.219 "seek_data": false, 00:04:36.219 "copy": true, 00:04:36.219 "nvme_iov_md": false 
00:04:36.219 }, 00:04:36.219 "memory_domains": [ 00:04:36.219 { 00:04:36.219 "dma_device_id": "system", 00:04:36.219 "dma_device_type": 1 00:04:36.219 }, 00:04:36.219 { 00:04:36.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.219 "dma_device_type": 2 00:04:36.219 } 00:04:36.219 ], 00:04:36.219 "driver_specific": {} 00:04:36.219 } 00:04:36.219 ]' 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.219 19:11:59 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:36.219 19:11:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:36.478 19:12:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:36.478 00:04:36.478 real 0m0.145s 00:04:36.478 user 0m0.087s 00:04:36.478 sys 0m0.020s 00:04:36.478 19:12:00 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.478 19:12:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 ************************************ 00:04:36.478 END TEST rpc_plugins 00:04:36.478 ************************************ 00:04:36.478 19:12:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:36.478 19:12:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.478 19:12:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.478 19:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 ************************************ 00:04:36.478 START TEST rpc_trace_cmd_test 00:04:36.478 ************************************ 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:36.478 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1904931", 00:04:36.478 "tpoint_group_mask": "0x8", 00:04:36.478 "iscsi_conn": { 00:04:36.478 "mask": "0x2", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "scsi": { 00:04:36.478 "mask": "0x4", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "bdev": { 00:04:36.478 "mask": "0x8", 00:04:36.478 "tpoint_mask": "0xffffffffffffffff" 00:04:36.478 }, 00:04:36.478 "nvmf_rdma": { 00:04:36.478 "mask": "0x10", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "nvmf_tcp": { 00:04:36.478 "mask": "0x20", 00:04:36.478 
"tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "ftl": { 00:04:36.478 "mask": "0x40", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "blobfs": { 00:04:36.478 "mask": "0x80", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "dsa": { 00:04:36.478 "mask": "0x200", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "thread": { 00:04:36.478 "mask": "0x400", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "nvme_pcie": { 00:04:36.478 "mask": "0x800", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "iaa": { 00:04:36.478 "mask": "0x1000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "nvme_tcp": { 00:04:36.478 "mask": "0x2000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "bdev_nvme": { 00:04:36.478 "mask": "0x4000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "sock": { 00:04:36.478 "mask": "0x8000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "blob": { 00:04:36.478 "mask": "0x10000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "bdev_raid": { 00:04:36.478 "mask": "0x20000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 }, 00:04:36.478 "scheduler": { 00:04:36.478 "mask": "0x40000", 00:04:36.478 "tpoint_mask": "0x0" 00:04:36.478 } 00:04:36.478 }' 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:36.478 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:36.738 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:36.738 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:36.738 19:12:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:36.738 00:04:36.738 real 0m0.236s 00:04:36.738 user 0m0.198s 00:04:36.738 sys 0m0.029s 00:04:36.738 19:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.738 19:12:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:36.738 ************************************ 00:04:36.738 END TEST rpc_trace_cmd_test 00:04:36.738 ************************************ 00:04:36.738 19:12:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:36.738 19:12:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:36.738 19:12:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:36.738 19:12:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:36.738 19:12:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:36.738 19:12:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.738 ************************************ 00:04:36.738 START TEST rpc_daemon_integrity 00:04:36.738 ************************************ 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.738 19:12:00 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.738 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:36.738 { 00:04:36.738 "name": "Malloc2", 00:04:36.738 "aliases": [ 00:04:36.738 "e30128f0-66e3-4847-afb8-745893e9d249" 00:04:36.738 ], 00:04:36.738 "product_name": "Malloc disk", 00:04:36.738 "block_size": 512, 00:04:36.738 "num_blocks": 16384, 00:04:36.738 "uuid": "e30128f0-66e3-4847-afb8-745893e9d249", 00:04:36.738 "assigned_rate_limits": { 00:04:36.738 "rw_ios_per_sec": 0, 00:04:36.738 "rw_mbytes_per_sec": 0, 00:04:36.738 "r_mbytes_per_sec": 0, 00:04:36.738 "w_mbytes_per_sec": 0 00:04:36.738 }, 00:04:36.738 "claimed": false, 00:04:36.738 "zoned": false, 00:04:36.738 "supported_io_types": { 00:04:36.738 "read": true, 00:04:36.738 "write": true, 00:04:36.738 "unmap": true, 00:04:36.738 "flush": true, 00:04:36.738 "reset": true, 00:04:36.738 "nvme_admin": false, 00:04:36.738 "nvme_io": false, 00:04:36.738 "nvme_io_md": false, 00:04:36.738 "write_zeroes": true, 00:04:36.738 "zcopy": true, 00:04:36.738 "get_zone_info": false, 00:04:36.738 "zone_management": false, 00:04:36.738 "zone_append": false, 00:04:36.738 "compare": false, 00:04:36.738 "compare_and_write": false, 00:04:36.738 "abort": true, 00:04:36.738 "seek_hole": false, 00:04:36.738 "seek_data": false, 00:04:36.738 "copy": true, 00:04:36.738 "nvme_iov_md": false 00:04:36.738 }, 00:04:36.738 "memory_domains": [ 00:04:36.738 { 00:04:36.738 "dma_device_id": "system", 00:04:36.738 "dma_device_type": 1 00:04:36.738 }, 00:04:36.738 { 00:04:36.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.738 "dma_device_type": 2 00:04:36.738 } 00:04:36.738 ], 00:04:36.738 "driver_specific": {} 00:04:36.738 } 00:04:36.739 ]' 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.739 [2024-10-17 19:12:00.509608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:36.739 
[2024-10-17 19:12:00.509640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:36.739 [2024-10-17 19:12:00.509655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f7330 00:04:36.739 [2024-10-17 19:12:00.509662] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:36.739 [2024-10-17 19:12:00.510735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:36.739 [2024-10-17 19:12:00.510756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:36.739 Passthru0 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.739 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.998 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.998 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:36.998 { 00:04:36.998 "name": "Malloc2", 00:04:36.998 "aliases": [ 00:04:36.998 "e30128f0-66e3-4847-afb8-745893e9d249" 00:04:36.998 ], 00:04:36.998 "product_name": "Malloc disk", 00:04:36.998 "block_size": 512, 00:04:36.998 "num_blocks": 16384, 00:04:36.998 "uuid": "e30128f0-66e3-4847-afb8-745893e9d249", 00:04:36.998 "assigned_rate_limits": { 00:04:36.998 "rw_ios_per_sec": 0, 00:04:36.998 "rw_mbytes_per_sec": 0, 00:04:36.998 "r_mbytes_per_sec": 0, 00:04:36.999 "w_mbytes_per_sec": 0 00:04:36.999 }, 00:04:36.999 "claimed": true, 00:04:36.999 "claim_type": "exclusive_write", 00:04:36.999 "zoned": false, 00:04:36.999 "supported_io_types": { 00:04:36.999 "read": true, 00:04:36.999 "write": true, 00:04:36.999 "unmap": true, 00:04:36.999 "flush": true, 00:04:36.999 "reset": true, 00:04:36.999 "nvme_admin": false, 00:04:36.999 "nvme_io": false, 00:04:36.999 "nvme_io_md": false, 00:04:36.999 "write_zeroes": true, 00:04:36.999 "zcopy": true, 00:04:36.999 "get_zone_info": false, 00:04:36.999 "zone_management": false, 00:04:36.999 "zone_append": false, 00:04:36.999 "compare": false, 00:04:36.999 "compare_and_write": false, 00:04:36.999 "abort": true, 00:04:36.999 "seek_hole": false, 00:04:36.999 "seek_data": false, 00:04:36.999 "copy": true, 00:04:36.999 "nvme_iov_md": false 00:04:36.999 }, 00:04:36.999 "memory_domains": [ 00:04:36.999 { 00:04:36.999 "dma_device_id": "system", 00:04:36.999 "dma_device_type": 1 00:04:36.999 }, 00:04:36.999 { 00:04:36.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.999 "dma_device_type": 2 00:04:36.999 } 00:04:36.999 ], 00:04:36.999 "driver_specific": {} 00:04:36.999 }, 00:04:36.999 { 00:04:36.999 "name": "Passthru0", 00:04:36.999 "aliases": [ 00:04:36.999 "533e8f27-2ff9-5059-855f-ecd62dfb8b3f" 00:04:36.999 ], 00:04:36.999 "product_name": "passthru", 00:04:36.999 "block_size": 512, 00:04:36.999 "num_blocks": 16384, 00:04:36.999 "uuid": "533e8f27-2ff9-5059-855f-ecd62dfb8b3f", 00:04:36.999 "assigned_rate_limits": { 00:04:36.999 "rw_ios_per_sec": 0, 00:04:36.999 "rw_mbytes_per_sec": 0, 00:04:36.999 "r_mbytes_per_sec": 0, 00:04:36.999 "w_mbytes_per_sec": 0 00:04:36.999 }, 00:04:36.999 "claimed": false, 00:04:36.999 "zoned": false, 00:04:36.999 "supported_io_types": { 00:04:36.999 "read": true, 00:04:36.999 "write": true, 00:04:36.999 "unmap": true, 00:04:36.999 "flush": true, 00:04:36.999 "reset": true, 
00:04:36.999 "nvme_admin": false, 00:04:36.999 "nvme_io": false, 00:04:36.999 "nvme_io_md": false, 00:04:36.999 "write_zeroes": true, 00:04:36.999 "zcopy": true, 00:04:36.999 "get_zone_info": false, 00:04:36.999 "zone_management": false, 00:04:36.999 "zone_append": false, 00:04:36.999 "compare": false, 00:04:36.999 "compare_and_write": false, 00:04:36.999 "abort": true, 00:04:36.999 "seek_hole": false, 00:04:36.999 "seek_data": false, 00:04:36.999 "copy": true, 00:04:36.999 "nvme_iov_md": false 00:04:36.999 }, 00:04:36.999 "memory_domains": [ 00:04:36.999 { 00:04:36.999 "dma_device_id": "system", 00:04:36.999 "dma_device_type": 1 00:04:36.999 }, 00:04:36.999 { 00:04:36.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:36.999 "dma_device_type": 2 00:04:36.999 } 00:04:36.999 ], 00:04:36.999 "driver_specific": { 00:04:36.999 "passthru": { 00:04:36.999 "name": "Passthru0", 00:04:36.999 "base_bdev_name": "Malloc2" 00:04:36.999 } 00:04:36.999 } 00:04:36.999 } 00:04:36.999 ]' 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.999 00:04:36.999 real 0m0.263s 00:04:36.999 user 0m0.158s 00:04:36.999 sys 0m0.043s 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.999 19:12:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.999 ************************************ 00:04:36.999 END TEST rpc_daemon_integrity 00:04:36.999 ************************************ 00:04:36.999 19:12:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.999 19:12:00 rpc -- rpc/rpc.sh@84 -- # killprocess 1904931 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@950 -- # '[' -z 1904931 ']' 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@954 -- # kill -0 1904931 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@955 -- # uname 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1904931 
00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1904931' 00:04:36.999 killing process with pid 1904931 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@969 -- # kill 1904931 00:04:36.999 19:12:00 rpc -- common/autotest_common.sh@974 -- # wait 1904931 00:04:37.259 00:04:37.259 real 0m2.086s 00:04:37.259 user 0m2.663s 00:04:37.259 sys 0m0.705s 00:04:37.259 19:12:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.259 19:12:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.259 ************************************ 00:04:37.259 END TEST rpc 00:04:37.259 ************************************ 00:04:37.519 19:12:01 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.519 19:12:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.519 19:12:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.519 19:12:01 -- common/autotest_common.sh@10 -- # set +x 00:04:37.519 ************************************ 00:04:37.519 START TEST skip_rpc 00:04:37.519 ************************************ 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.519 * Looking for test storage... 00:04:37.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.519 19:12:01 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.519 --rc genhtml_branch_coverage=1 00:04:37.519 --rc genhtml_function_coverage=1 00:04:37.519 --rc genhtml_legend=1 00:04:37.519 --rc geninfo_all_blocks=1 00:04:37.519 --rc geninfo_unexecuted_blocks=1 00:04:37.519 00:04:37.519 ' 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.519 --rc genhtml_branch_coverage=1 00:04:37.519 --rc genhtml_function_coverage=1 00:04:37.519 --rc genhtml_legend=1 00:04:37.519 --rc geninfo_all_blocks=1 00:04:37.519 --rc geninfo_unexecuted_blocks=1 00:04:37.519 00:04:37.519 ' 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.519 --rc genhtml_branch_coverage=1 00:04:37.519 --rc genhtml_function_coverage=1 00:04:37.519 --rc genhtml_legend=1 00:04:37.519 --rc geninfo_all_blocks=1 00:04:37.519 --rc geninfo_unexecuted_blocks=1 00:04:37.519 00:04:37.519 ' 00:04:37.519 19:12:01 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.519 --rc genhtml_branch_coverage=1 00:04:37.519 --rc genhtml_function_coverage=1 00:04:37.519 --rc genhtml_legend=1 00:04:37.519 --rc geninfo_all_blocks=1 00:04:37.519 --rc geninfo_unexecuted_blocks=1 00:04:37.519 00:04:37.519 ' 00:04:37.520 19:12:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:37.520 19:12:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:37.520 19:12:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.520 19:12:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.520 19:12:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.520 19:12:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.778 ************************************ 00:04:37.778 START TEST skip_rpc 00:04:37.778 ************************************ 00:04:37.778 19:12:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:37.778 
19:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1905485 00:04:37.778 19:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.778 19:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.778 19:12:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:37.778 [2024-10-17 19:12:01.368175] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:04:37.778 [2024-10-17 19:12:01.368221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1905485 ] 00:04:37.778 [2024-10-17 19:12:01.445448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.778 [2024-10-17 19:12:01.486944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.064 19:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.064 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:43.064 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.064 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:43.064 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.064 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1905485 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1905485 ']' 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1905485 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1905485 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1905485' 00:04:43.065 killing process with pid 1905485 00:04:43.065 19:12:06 
skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1905485 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1905485 00:04:43.065 00:04:43.065 real 0m5.361s 00:04:43.065 user 0m5.130s 00:04:43.065 sys 0m0.267s 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.065 19:12:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.065 ************************************ 00:04:43.065 END TEST skip_rpc 00:04:43.065 ************************************ 00:04:43.065 19:12:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.065 19:12:06 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.065 19:12:06 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.065 19:12:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.065 ************************************ 00:04:43.065 START TEST skip_rpc_with_json 00:04:43.065 ************************************ 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1906459 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1906459 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1906459 ']' 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:43.065 19:12:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.065 [2024-10-17 19:12:06.798534] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:04:43.065 [2024-10-17 19:12:06.798574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1906459 ] 00:04:43.356 [2024-10-17 19:12:06.873645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.356 [2024-10-17 19:12:06.915589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.656 [2024-10-17 19:12:07.132409] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:43.656 request: 00:04:43.656 { 00:04:43.656 "trtype": "tcp", 00:04:43.656 "method": "nvmf_get_transports", 00:04:43.656 "req_id": 1 00:04:43.656 } 00:04:43.656 Got JSON-RPC error response 00:04:43.656 response: 00:04:43.656 { 00:04:43.656 "code": -19, 00:04:43.656 "message": "No such device" 00:04:43.656 } 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.656 [2024-10-17 19:12:07.144516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.656 { 00:04:43.656 "subsystems": [ 00:04:43.656 { 00:04:43.656 "subsystem": "fsdev", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "fsdev_set_opts", 00:04:43.656 "params": { 00:04:43.656 "fsdev_io_pool_size": 65535, 00:04:43.656 "fsdev_io_cache_size": 256 00:04:43.656 } 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "vfio_user_target", 00:04:43.656 "config": null 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "keyring", 00:04:43.656 "config": [] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "iobuf", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "iobuf_set_options", 00:04:43.656 "params": { 00:04:43.656 "small_pool_count": 8192, 00:04:43.656 "large_pool_count": 1024, 00:04:43.656 "small_bufsize": 8192, 00:04:43.656 "large_bufsize": 135168, 00:04:43.656 "enable_numa": false 00:04:43.656 } 00:04:43.656 } 
00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "sock", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "sock_set_default_impl", 00:04:43.656 "params": { 00:04:43.656 "impl_name": "posix" 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "sock_impl_set_options", 00:04:43.656 "params": { 00:04:43.656 "impl_name": "ssl", 00:04:43.656 "recv_buf_size": 4096, 00:04:43.656 "send_buf_size": 4096, 00:04:43.656 "enable_recv_pipe": true, 00:04:43.656 "enable_quickack": false, 00:04:43.656 "enable_placement_id": 0, 00:04:43.656 "enable_zerocopy_send_server": true, 00:04:43.656 "enable_zerocopy_send_client": false, 00:04:43.656 "zerocopy_threshold": 0, 00:04:43.656 "tls_version": 0, 00:04:43.656 "enable_ktls": false 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "sock_impl_set_options", 00:04:43.656 "params": { 00:04:43.656 "impl_name": "posix", 00:04:43.656 "recv_buf_size": 2097152, 00:04:43.656 "send_buf_size": 2097152, 00:04:43.656 "enable_recv_pipe": true, 00:04:43.656 "enable_quickack": false, 00:04:43.656 "enable_placement_id": 0, 00:04:43.656 "enable_zerocopy_send_server": true, 00:04:43.656 "enable_zerocopy_send_client": false, 00:04:43.656 "zerocopy_threshold": 0, 00:04:43.656 "tls_version": 0, 00:04:43.656 "enable_ktls": false 00:04:43.656 } 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "vmd", 00:04:43.656 "config": [] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "accel", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "accel_set_options", 00:04:43.656 "params": { 00:04:43.656 "small_cache_size": 128, 00:04:43.656 "large_cache_size": 16, 00:04:43.656 "task_count": 2048, 00:04:43.656 "sequence_count": 2048, 00:04:43.656 "buf_count": 2048 00:04:43.656 } 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "bdev", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "bdev_set_options", 00:04:43.656 "params": { 00:04:43.656 "bdev_io_pool_size": 65535, 00:04:43.656 "bdev_io_cache_size": 256, 00:04:43.656 "bdev_auto_examine": true, 00:04:43.656 "iobuf_small_cache_size": 128, 00:04:43.656 "iobuf_large_cache_size": 16 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "bdev_raid_set_options", 00:04:43.656 "params": { 00:04:43.656 "process_window_size_kb": 1024, 00:04:43.656 "process_max_bandwidth_mb_sec": 0 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "bdev_iscsi_set_options", 00:04:43.656 "params": { 00:04:43.656 "timeout_sec": 30 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "bdev_nvme_set_options", 00:04:43.656 "params": { 00:04:43.656 "action_on_timeout": "none", 00:04:43.656 "timeout_us": 0, 00:04:43.656 "timeout_admin_us": 0, 00:04:43.656 "keep_alive_timeout_ms": 10000, 00:04:43.656 "arbitration_burst": 0, 00:04:43.656 "low_priority_weight": 0, 00:04:43.656 "medium_priority_weight": 0, 00:04:43.656 "high_priority_weight": 0, 00:04:43.656 "nvme_adminq_poll_period_us": 10000, 00:04:43.656 "nvme_ioq_poll_period_us": 0, 00:04:43.656 "io_queue_requests": 0, 00:04:43.656 "delay_cmd_submit": true, 00:04:43.656 "transport_retry_count": 4, 00:04:43.656 "bdev_retry_count": 3, 00:04:43.656 "transport_ack_timeout": 0, 00:04:43.656 "ctrlr_loss_timeout_sec": 0, 00:04:43.656 "reconnect_delay_sec": 0, 00:04:43.656 "fast_io_fail_timeout_sec": 0, 00:04:43.656 "disable_auto_failback": false, 00:04:43.656 "generate_uuids": false, 00:04:43.656 "transport_tos": 
0, 00:04:43.656 "nvme_error_stat": false, 00:04:43.656 "rdma_srq_size": 0, 00:04:43.656 "io_path_stat": false, 00:04:43.656 "allow_accel_sequence": false, 00:04:43.656 "rdma_max_cq_size": 0, 00:04:43.656 "rdma_cm_event_timeout_ms": 0, 00:04:43.656 "dhchap_digests": [ 00:04:43.656 "sha256", 00:04:43.656 "sha384", 00:04:43.656 "sha512" 00:04:43.656 ], 00:04:43.656 "dhchap_dhgroups": [ 00:04:43.656 "null", 00:04:43.656 "ffdhe2048", 00:04:43.656 "ffdhe3072", 00:04:43.656 "ffdhe4096", 00:04:43.656 "ffdhe6144", 00:04:43.656 "ffdhe8192" 00:04:43.656 ] 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "bdev_nvme_set_hotplug", 00:04:43.656 "params": { 00:04:43.656 "period_us": 100000, 00:04:43.656 "enable": false 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "bdev_wait_for_examine" 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "scsi", 00:04:43.656 "config": null 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "scheduler", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "framework_set_scheduler", 00:04:43.656 "params": { 00:04:43.656 "name": "static" 00:04:43.656 } 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "vhost_scsi", 00:04:43.656 "config": [] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "vhost_blk", 00:04:43.656 "config": [] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "ublk", 00:04:43.656 "config": [] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "nbd", 00:04:43.656 "config": [] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "nvmf", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "nvmf_set_config", 00:04:43.656 "params": { 00:04:43.656 "discovery_filter": "match_any", 00:04:43.656 "admin_cmd_passthru": { 00:04:43.656 "identify_ctrlr": false 00:04:43.656 }, 00:04:43.656 "dhchap_digests": [ 00:04:43.656 "sha256", 00:04:43.656 "sha384", 00:04:43.656 "sha512" 00:04:43.656 ], 00:04:43.656 "dhchap_dhgroups": [ 00:04:43.656 "null", 00:04:43.656 "ffdhe2048", 00:04:43.656 "ffdhe3072", 00:04:43.656 "ffdhe4096", 00:04:43.656 "ffdhe6144", 00:04:43.656 "ffdhe8192" 00:04:43.656 ] 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "nvmf_set_max_subsystems", 00:04:43.656 "params": { 00:04:43.656 "max_subsystems": 1024 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "nvmf_set_crdt", 00:04:43.656 "params": { 00:04:43.656 "crdt1": 0, 00:04:43.656 "crdt2": 0, 00:04:43.656 "crdt3": 0 00:04:43.656 } 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "method": "nvmf_create_transport", 00:04:43.656 "params": { 00:04:43.656 "trtype": "TCP", 00:04:43.656 "max_queue_depth": 128, 00:04:43.656 "max_io_qpairs_per_ctrlr": 127, 00:04:43.656 "in_capsule_data_size": 4096, 00:04:43.656 "max_io_size": 131072, 00:04:43.656 "io_unit_size": 131072, 00:04:43.656 "max_aq_depth": 128, 00:04:43.656 "num_shared_buffers": 511, 00:04:43.656 "buf_cache_size": 4294967295, 00:04:43.656 "dif_insert_or_strip": false, 00:04:43.656 "zcopy": false, 00:04:43.656 "c2h_success": true, 00:04:43.656 "sock_priority": 0, 00:04:43.656 "abort_timeout_sec": 1, 00:04:43.656 "ack_timeout": 0, 00:04:43.656 "data_wr_pool_size": 0 00:04:43.656 } 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 }, 00:04:43.656 { 00:04:43.656 "subsystem": "iscsi", 00:04:43.656 "config": [ 00:04:43.656 { 00:04:43.656 "method": "iscsi_set_options", 00:04:43.656 "params": { 00:04:43.656 "node_base": "iqn.2016-06.io.spdk", 00:04:43.656 "max_sessions": 
128, 00:04:43.656 "max_connections_per_session": 2, 00:04:43.656 "max_queue_depth": 64, 00:04:43.656 "default_time2wait": 2, 00:04:43.656 "default_time2retain": 20, 00:04:43.656 "first_burst_length": 8192, 00:04:43.656 "immediate_data": true, 00:04:43.656 "allow_duplicated_isid": false, 00:04:43.656 "error_recovery_level": 0, 00:04:43.656 "nop_timeout": 60, 00:04:43.656 "nop_in_interval": 30, 00:04:43.656 "disable_chap": false, 00:04:43.656 "require_chap": false, 00:04:43.656 "mutual_chap": false, 00:04:43.656 "chap_group": 0, 00:04:43.656 "max_large_datain_per_connection": 64, 00:04:43.656 "max_r2t_per_connection": 4, 00:04:43.656 "pdu_pool_size": 36864, 00:04:43.656 "immediate_data_pool_size": 16384, 00:04:43.656 "data_out_pool_size": 2048 00:04:43.656 } 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 } 00:04:43.656 ] 00:04:43.656 } 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1906459 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1906459 ']' 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1906459 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1906459 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1906459' 00:04:43.656 killing process with pid 1906459 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1906459 00:04:43.656 19:12:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1906459 00:04:43.949 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1906671 00:04:43.949 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.949 19:12:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1906671 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1906671 ']' 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1906671 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1906671 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 1906671' 00:04:49.308 killing process with pid 1906671 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1906671 00:04:49.308 19:12:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1906671 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:49.308 00:04:49.308 real 0m6.281s 00:04:49.308 user 0m5.965s 00:04:49.308 sys 0m0.601s 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.308 ************************************ 00:04:49.308 END TEST skip_rpc_with_json 00:04:49.308 ************************************ 00:04:49.308 19:12:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:49.308 19:12:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.308 19:12:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.308 19:12:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.308 ************************************ 00:04:49.308 START TEST skip_rpc_with_delay 00:04:49.308 ************************************ 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.308 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.309 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.309 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.309 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.309 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.309 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:49.567 
[2024-10-17 19:12:13.145953] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.567 00:04:49.567 real 0m0.070s 00:04:49.567 user 0m0.043s 00:04:49.567 sys 0m0.026s 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.567 19:12:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:49.567 ************************************ 00:04:49.567 END TEST skip_rpc_with_delay 00:04:49.567 ************************************ 00:04:49.567 19:12:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:49.567 19:12:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:49.567 19:12:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:49.567 19:12:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.567 19:12:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.567 19:12:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.567 ************************************ 00:04:49.567 START TEST exit_on_failed_rpc_init 00:04:49.567 ************************************ 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1908034 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1908034 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1908034 ']' 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.567 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.567 [2024-10-17 19:12:13.281741] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:04:49.567 [2024-10-17 19:12:13.281786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908034 ] 00:04:49.567 [2024-10-17 19:12:13.338376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.825 [2024-10-17 19:12:13.381518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:49.825 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:50.084 [2024-10-17 19:12:13.654890] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:04:50.084 [2024-10-17 19:12:13.654938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908061 ] 00:04:50.084 [2024-10-17 19:12:13.731417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.084 [2024-10-17 19:12:13.771667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.084 [2024-10-17 19:12:13.771720] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:50.084 [2024-10-17 19:12:13.771728] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:50.084 [2024-10-17 19:12:13.771735] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1908034 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1908034 ']' 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1908034 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1908034 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1908034' 00:04:50.084 killing process with pid 1908034 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1908034 00:04:50.084 19:12:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1908034 00:04:50.651 00:04:50.651 real 0m0.919s 00:04:50.651 user 0m0.998s 00:04:50.651 sys 0m0.365s 00:04:50.651 19:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.651 19:12:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:50.651 ************************************ 00:04:50.651 END TEST exit_on_failed_rpc_init 00:04:50.651 ************************************ 00:04:50.651 19:12:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:50.651 00:04:50.651 real 0m13.084s 00:04:50.651 user 0m12.345s 00:04:50.651 sys 0m1.534s 00:04:50.651 19:12:14 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.651 19:12:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.651 ************************************ 00:04:50.651 END TEST skip_rpc 00:04:50.651 ************************************ 00:04:50.651 19:12:14 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.651 19:12:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.651 19:12:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.651 19:12:14 -- 
common/autotest_common.sh@10 -- # set +x 00:04:50.651 ************************************ 00:04:50.651 START TEST rpc_client 00:04:50.651 ************************************ 00:04:50.651 19:12:14 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:50.651 * Looking for test storage... 00:04:50.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:50.651 19:12:14 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.651 19:12:14 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.651 19:12:14 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.651 19:12:14 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.651 19:12:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.651 19:12:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.651 19:12:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.652 19:12:14 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:50.652 19:12:14 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.652 19:12:14 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.652 --rc genhtml_branch_coverage=1 00:04:50.652 --rc genhtml_function_coverage=1 00:04:50.652 --rc genhtml_legend=1 00:04:50.652 --rc geninfo_all_blocks=1 00:04:50.652 --rc geninfo_unexecuted_blocks=1 00:04:50.652 00:04:50.652 ' 00:04:50.652 19:12:14 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.652 --rc genhtml_branch_coverage=1 00:04:50.652 --rc genhtml_function_coverage=1 00:04:50.652 --rc genhtml_legend=1 00:04:50.652 --rc geninfo_all_blocks=1 00:04:50.652 --rc geninfo_unexecuted_blocks=1 00:04:50.652 00:04:50.652 ' 00:04:50.652 19:12:14 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.652 --rc genhtml_branch_coverage=1 00:04:50.652 --rc genhtml_function_coverage=1 00:04:50.652 --rc genhtml_legend=1 00:04:50.652 --rc geninfo_all_blocks=1 00:04:50.652 --rc geninfo_unexecuted_blocks=1 00:04:50.652 00:04:50.652 ' 00:04:50.652 19:12:14 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.652 --rc genhtml_branch_coverage=1 00:04:50.652 --rc genhtml_function_coverage=1 00:04:50.652 --rc genhtml_legend=1 00:04:50.652 --rc geninfo_all_blocks=1 00:04:50.652 --rc geninfo_unexecuted_blocks=1 00:04:50.652 00:04:50.652 ' 00:04:50.652 19:12:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:50.912 OK 00:04:50.912 19:12:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:50.912 00:04:50.912 real 0m0.199s 00:04:50.912 user 0m0.116s 00:04:50.912 sys 0m0.096s 00:04:50.912 19:12:14 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.912 19:12:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 ************************************ 00:04:50.912 END TEST rpc_client 00:04:50.912 ************************************ 00:04:50.912 19:12:14 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:04:50.912 19:12:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.912 19:12:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.912 19:12:14 -- common/autotest_common.sh@10 -- # set +x 00:04:50.912 ************************************ 00:04:50.912 START TEST json_config 00:04:50.912 ************************************ 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.912 19:12:14 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.912 19:12:14 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.912 19:12:14 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.912 19:12:14 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.912 19:12:14 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.912 19:12:14 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:50.912 19:12:14 json_config -- scripts/common.sh@345 -- # : 1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.912 19:12:14 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.912 19:12:14 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@353 -- # local d=1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.912 19:12:14 json_config -- scripts/common.sh@355 -- # echo 1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.912 19:12:14 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@353 -- # local d=2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.912 19:12:14 json_config -- scripts/common.sh@355 -- # echo 2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.912 19:12:14 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.912 19:12:14 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.912 19:12:14 json_config -- scripts/common.sh@368 -- # return 0 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.912 --rc genhtml_branch_coverage=1 00:04:50.912 --rc genhtml_function_coverage=1 00:04:50.912 --rc genhtml_legend=1 00:04:50.912 --rc geninfo_all_blocks=1 00:04:50.912 --rc geninfo_unexecuted_blocks=1 00:04:50.912 00:04:50.912 ' 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.912 --rc genhtml_branch_coverage=1 00:04:50.912 --rc genhtml_function_coverage=1 00:04:50.912 --rc genhtml_legend=1 00:04:50.912 --rc geninfo_all_blocks=1 00:04:50.912 --rc geninfo_unexecuted_blocks=1 00:04:50.912 00:04:50.912 ' 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.912 --rc genhtml_branch_coverage=1 00:04:50.912 --rc genhtml_function_coverage=1 00:04:50.912 --rc genhtml_legend=1 00:04:50.912 --rc geninfo_all_blocks=1 00:04:50.912 --rc geninfo_unexecuted_blocks=1 00:04:50.912 00:04:50.912 ' 00:04:50.912 19:12:14 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:50.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.912 --rc genhtml_branch_coverage=1 00:04:50.912 --rc genhtml_function_coverage=1 00:04:50.912 --rc genhtml_legend=1 00:04:50.912 --rc geninfo_all_blocks=1 00:04:50.912 --rc geninfo_unexecuted_blocks=1 00:04:50.912 00:04:50.912 ' 00:04:50.912 19:12:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:50.912 19:12:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:50.912 19:12:14 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:50.912 19:12:14 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:50.912 19:12:14 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:50.912 19:12:14 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:50.912 19:12:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.912 19:12:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.912 19:12:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.912 19:12:14 json_config -- paths/export.sh@5 -- # export PATH 00:04:50.912 19:12:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@51 -- # : 0 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:50.912 19:12:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:50.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:50.912 19:12:14 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:51.172 INFO: JSON configuration test init 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:51.172 19:12:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.172 19:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:51.172 19:12:14 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.172 19:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.172 19:12:14 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:51.172 19:12:14 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:51.172 19:12:14 json_config -- json_config/common.sh@10 -- # shift 00:04:51.173 19:12:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.173 19:12:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.173 19:12:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.173 19:12:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.173 19:12:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.173 19:12:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1908394 00:04:51.173 19:12:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.173 Waiting for target to run... 00:04:51.173 19:12:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1908394 /var/tmp/spdk_tgt.sock 00:04:51.173 19:12:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:51.173 19:12:14 json_config -- common/autotest_common.sh@831 -- # '[' -z 1908394 ']' 00:04:51.173 19:12:14 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.173 19:12:14 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.173 19:12:14 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:51.173 19:12:14 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.173 19:12:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.173 [2024-10-17 19:12:14.766667] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:04:51.173 [2024-10-17 19:12:14.766718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908394 ] 00:04:51.432 [2024-10-17 19:12:15.056141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.432 [2024-10-17 19:12:15.092202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.000 19:12:15 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:52.000 19:12:15 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:52.000 19:12:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:52.000 00:04:52.000 19:12:15 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:52.000 19:12:15 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:52.000 19:12:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.000 19:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.000 19:12:15 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:52.000 19:12:15 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:52.000 19:12:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.000 19:12:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.000 19:12:15 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:52.000 19:12:15 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:52.000 19:12:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:55.290 19:12:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.290 19:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:55.290 19:12:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:55.290 19:12:18 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@54 -- # sort 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:55.290 19:12:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.290 19:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:55.290 19:12:18 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.290 19:12:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:55.290 19:12:18 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.290 19:12:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:55.549 MallocForNvmf0 00:04:55.549 19:12:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.549 19:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:55.808 MallocForNvmf1 00:04:55.808 19:12:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.808 19:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:55.808 [2024-10-17 19:12:19.540772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.808 19:12:19 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:55.808 19:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:56.067 19:12:19 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.067 19:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:56.325 19:12:19 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.325 19:12:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:56.585 19:12:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.585 19:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:56.585 [2024-10-17 19:12:20.323204] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:56.585 19:12:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:56.585 19:12:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.585 19:12:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.845 19:12:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:56.845 19:12:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.845 19:12:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.845 19:12:20 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:56.845 19:12:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:56.845 19:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:56.845 MallocBdevForConfigChangeCheck 00:04:57.103 19:12:20 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:57.103 19:12:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.103 19:12:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.103 19:12:20 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:57.103 19:12:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.362 19:12:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:57.362 INFO: shutting down applications... 
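Before the shutdown below proceeds, it is worth condensing what the trace above actually built. The whole NVMe-oF/TCP target configuration reduces to the following RPC sequence; this is a sketch, in which the rpc shell variable and the save_config redirection are illustrative shorthand, while every command, flag, and name is taken verbatim from this run:

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # malloc bdevs backing the namespaces
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, flags as in the trace
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $rpc save_config > /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json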
00:04:57.362 19:12:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:57.362 19:12:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:57.362 19:12:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:57.362 19:12:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:59.898 Calling clear_iscsi_subsystem 00:04:59.898 Calling clear_nvmf_subsystem 00:04:59.898 Calling clear_nbd_subsystem 00:04:59.898 Calling clear_ublk_subsystem 00:04:59.898 Calling clear_vhost_blk_subsystem 00:04:59.898 Calling clear_vhost_scsi_subsystem 00:04:59.898 Calling clear_bdev_subsystem 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@352 -- # break 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:59.898 19:12:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:59.898 19:12:23 json_config -- json_config/common.sh@31 -- # local app=target 00:04:59.898 19:12:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:59.898 19:12:23 json_config -- json_config/common.sh@35 -- # [[ -n 1908394 ]] 00:04:59.898 19:12:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1908394 00:04:59.898 19:12:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:59.898 19:12:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:59.898 19:12:23 json_config -- json_config/common.sh@41 -- # kill -0 1908394 00:04:59.898 19:12:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.467 19:12:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.467 19:12:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.467 19:12:23 json_config -- json_config/common.sh@41 -- # kill -0 1908394 00:05:00.467 19:12:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:00.467 19:12:23 json_config -- json_config/common.sh@43 -- # break 00:05:00.467 19:12:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:00.467 19:12:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:00.467 SPDK target shutdown done 00:05:00.467 19:12:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:00.467 INFO: relaunching applications... 
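Stripped of the xtrace noise, the shutdown just completed is json_config/common.sh's poll loop; a minimal sketch, using this run's PID and the same 30 x 0.5 s budget visible in the trace:

    pid=1908394                                # spdk_tgt PID from this run
    kill -SIGINT "$pid"                        # request a clean shutdown
    for i in $(seq 1 30); do                   # poll for up to ~15 seconds
        kill -0 "$pid" 2>/dev/null || break    # kill -0 sends no signal, only checks existence
        sleep 0.5
    done
    echo 'SPDK target shutdown done'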
00:05:00.467 19:12:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.467 19:12:23 json_config -- json_config/common.sh@9 -- # local app=target 00:05:00.467 19:12:23 json_config -- json_config/common.sh@10 -- # shift 00:05:00.467 19:12:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.467 19:12:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.467 19:12:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.467 19:12:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.467 19:12:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.467 19:12:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1910130 00:05:00.467 19:12:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.467 Waiting for target to run... 00:05:00.467 19:12:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:00.467 19:12:23 json_config -- json_config/common.sh@25 -- # waitforlisten 1910130 /var/tmp/spdk_tgt.sock 00:05:00.467 19:12:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 1910130 ']' 00:05:00.467 19:12:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.467 19:12:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:00.467 19:12:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.467 19:12:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:00.467 19:12:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.467 [2024-10-17 19:12:24.028546] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:00.467 [2024-10-17 19:12:24.028615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1910130 ] 00:05:00.726 [2024-10-17 19:12:24.481242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.984 [2024-10-17 19:12:24.534697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.273 [2024-10-17 19:12:27.564277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.273 [2024-10-17 19:12:27.596645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.532 19:12:28 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.532 19:12:28 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:04.532 19:12:28 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.532 00:05:04.532 19:12:28 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:04.532 19:12:28 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:04.532 INFO: Checking if target configuration is the same... 
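The relaunch above restarts the target from the JSON written by save_config rather than re-issuing the RPCs one by one; condensed, with the binary path and flags exactly as used in this run (the app_pid assignment is illustrative shorthand for what waitforlisten receives):

    bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    cfg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    $bin -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$cfg" &   # replay the saved config at startup
    app_pid=$!                                                      # waitforlisten then polls this PID's socket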
00:05:04.532 19:12:28 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:04.532 19:12:28 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.532 19:12:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.532 + '[' 2 -ne 2 ']' 00:05:04.532 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:04.532 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:04.532 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:04.532 +++ basename /dev/fd/62 00:05:04.532 ++ mktemp /tmp/62.XXX 00:05:04.532 + tmp_file_1=/tmp/62.nn8 00:05:04.532 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:04.532 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.532 + tmp_file_2=/tmp/spdk_tgt_config.json.F1X 00:05:04.532 + ret=0 00:05:04.532 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.100 + diff -u /tmp/62.nn8 /tmp/spdk_tgt_config.json.F1X 00:05:05.100 + echo 'INFO: JSON config files are the same' 00:05:05.100 INFO: JSON config files are the same 00:05:05.100 + rm /tmp/62.nn8 /tmp/spdk_tgt_config.json.F1X 00:05:05.100 + exit 0 00:05:05.100 19:12:28 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:05.100 19:12:28 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:05.100 INFO: changing configuration and checking if this can be detected... 00:05:05.100 19:12:28 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.100 19:12:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.100 19:12:28 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.100 19:12:28 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:05.100 19:12:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.100 + '[' 2 -ne 2 ']' 00:05:05.100 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:05.100 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
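Both comparisons, the matching one above and the post-bdev_malloc_delete one that follows, reduce to the same recipe: normalize each config with config_filter.py -method sort and diff the results. A sketch (the /tmp file names here are illustrative; json_diff.sh uses mktemp, as the trace shows):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    saved=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    $rpc save_config | $filter -method sort > /tmp/live.json     # running target's config, key-sorted
    $filter -method sort < "$saved" > /tmp/saved.json            # on-disk config, key-sorted
    diff -u /tmp/live.json /tmp/saved.json \
        && echo 'INFO: JSON config files are the same'           # a non-empty diff means a change was detected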
00:05:05.100 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:05.100 +++ basename /dev/fd/62 00:05:05.100 ++ mktemp /tmp/62.XXX 00:05:05.100 + tmp_file_1=/tmp/62.j7R 00:05:05.100 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.100 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.100 + tmp_file_2=/tmp/spdk_tgt_config.json.PRt 00:05:05.100 + ret=0 00:05:05.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.669 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:05.669 + diff -u /tmp/62.j7R /tmp/spdk_tgt_config.json.PRt 00:05:05.669 + ret=1 00:05:05.669 + echo '=== Start of file: /tmp/62.j7R ===' 00:05:05.669 + cat /tmp/62.j7R 00:05:05.669 + echo '=== End of file: /tmp/62.j7R ===' 00:05:05.669 + echo '' 00:05:05.669 + echo '=== Start of file: /tmp/spdk_tgt_config.json.PRt ===' 00:05:05.669 + cat /tmp/spdk_tgt_config.json.PRt 00:05:05.669 + echo '=== End of file: /tmp/spdk_tgt_config.json.PRt ===' 00:05:05.669 + echo '' 00:05:05.669 + rm /tmp/62.j7R /tmp/spdk_tgt_config.json.PRt 00:05:05.669 + exit 1 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:05.669 INFO: configuration change detected. 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@324 -- # [[ -n 1910130 ]] 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.669 19:12:29 json_config -- json_config/json_config.sh@330 -- # killprocess 1910130 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@950 -- # '[' -z 1910130 ']' 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@954 -- # kill -0 1910130 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@955 -- # uname 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:05.669 19:12:29 
json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1910130 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1910130' 00:05:05.669 killing process with pid 1910130 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@969 -- # kill 1910130 00:05:05.669 19:12:29 json_config -- common/autotest_common.sh@974 -- # wait 1910130 00:05:08.206 19:12:31 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.206 19:12:31 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:08.206 19:12:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:08.206 19:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.206 19:12:31 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:08.206 19:12:31 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:08.206 INFO: Success 00:05:08.206 00:05:08.206 real 0m16.974s 00:05:08.206 user 0m17.563s 00:05:08.206 sys 0m2.612s 00:05:08.206 19:12:31 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.206 19:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.206 ************************************ 00:05:08.206 END TEST json_config 00:05:08.206 ************************************ 00:05:08.206 19:12:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.206 19:12:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.206 19:12:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.206 19:12:31 -- common/autotest_common.sh@10 -- # set +x 00:05:08.206 ************************************ 00:05:08.206 START TEST json_config_extra_key 00:05:08.206 ************************************ 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.206 19:12:31 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:08.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.206 --rc genhtml_branch_coverage=1 00:05:08.206 --rc genhtml_function_coverage=1 00:05:08.206 --rc genhtml_legend=1 00:05:08.206 --rc geninfo_all_blocks=1 00:05:08.206 --rc geninfo_unexecuted_blocks=1 00:05:08.206 00:05:08.206 ' 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:08.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.206 --rc genhtml_branch_coverage=1 00:05:08.206 --rc genhtml_function_coverage=1 00:05:08.206 --rc genhtml_legend=1 00:05:08.206 --rc geninfo_all_blocks=1 00:05:08.206 --rc geninfo_unexecuted_blocks=1 00:05:08.206 00:05:08.206 ' 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:08.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.206 --rc genhtml_branch_coverage=1 00:05:08.206 --rc genhtml_function_coverage=1 00:05:08.206 --rc genhtml_legend=1 00:05:08.206 --rc geninfo_all_blocks=1 00:05:08.206 --rc geninfo_unexecuted_blocks=1 00:05:08.206 00:05:08.206 ' 00:05:08.206 19:12:31 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:08.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.206 --rc genhtml_branch_coverage=1 00:05:08.206 --rc genhtml_function_coverage=1 00:05:08.206 --rc genhtml_legend=1 00:05:08.206 --rc geninfo_all_blocks=1 00:05:08.206 --rc geninfo_unexecuted_blocks=1 00:05:08.206 00:05:08.206 ' 00:05:08.206 19:12:31 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.206 19:12:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.206 19:12:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.206 19:12:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.206 19:12:31 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.206 19:12:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:08.206 19:12:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.206 19:12:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.207 19:12:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.207 19:12:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:08.207 19:12:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.207 19:12:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.207 19:12:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.207 19:12:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:08.207 INFO: launching applications... 
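The launch that follows blocks in waitforlisten until the new target answers on its RPC socket. A rough approximation of that wait (not the helper's actual code) is a retry loop around any cheap RPC; rpc_get_methods and the -t timeout flag both appear later in this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    for i in $(seq 1 100); do                                         # retry budget, illustrative
        $rpc -s "$sock" -t 2 rpc_get_methods >/dev/null 2>&1 && break # -t: per-call timeout in seconds
        sleep 0.1
    done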
00:05:08.207 19:12:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1911603 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.207 Waiting for target to run... 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1911603 /var/tmp/spdk_tgt.sock 00:05:08.207 19:12:31 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1911603 ']' 00:05:08.207 19:12:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:08.207 19:12:31 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.207 19:12:31 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.207 19:12:31 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.207 19:12:31 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.207 19:12:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.207 [2024-10-17 19:12:31.807295] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:08.207 [2024-10-17 19:12:31.807345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911603 ] 00:05:08.466 [2024-10-17 19:12:32.096991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.466 [2024-10-17 19:12:32.130676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.035 19:12:32 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.035 19:12:32 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:09.035 00:05:09.035 19:12:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:09.035 INFO: shutting down applications... 
00:05:09.035 19:12:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1911603 ]] 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1911603 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1911603 00:05:09.035 19:12:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.603 19:12:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.603 19:12:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.603 19:12:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1911603 00:05:09.603 19:12:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:09.604 19:12:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:09.604 19:12:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:09.604 19:12:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:09.604 SPDK target shutdown done 00:05:09.604 19:12:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:09.604 Success 00:05:09.604 00:05:09.604 real 0m1.570s 00:05:09.604 user 0m1.328s 00:05:09.604 sys 0m0.413s 00:05:09.604 19:12:33 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.604 19:12:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:09.604 ************************************ 00:05:09.604 END TEST json_config_extra_key 00:05:09.604 ************************************ 00:05:09.604 19:12:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.604 19:12:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.604 19:12:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.604 19:12:33 -- common/autotest_common.sh@10 -- # set +x 00:05:09.604 ************************************ 00:05:09.604 START TEST alias_rpc 00:05:09.604 ************************************ 00:05:09.604 19:12:33 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:09.604 * Looking for test storage... 
00:05:09.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:09.604 19:12:33 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:09.604 19:12:33 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:09.604 19:12:33 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:09.604 19:12:33 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.604 19:12:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.863 19:12:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.863 --rc genhtml_branch_coverage=1 00:05:09.863 --rc genhtml_function_coverage=1 00:05:09.863 --rc genhtml_legend=1 00:05:09.863 --rc geninfo_all_blocks=1 00:05:09.863 --rc geninfo_unexecuted_blocks=1 00:05:09.863 00:05:09.863 ' 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.863 --rc genhtml_branch_coverage=1 00:05:09.863 --rc genhtml_function_coverage=1 00:05:09.863 --rc genhtml_legend=1 00:05:09.863 --rc geninfo_all_blocks=1 00:05:09.863 --rc geninfo_unexecuted_blocks=1 00:05:09.863 00:05:09.863 ' 00:05:09.863 19:12:33 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.863 --rc genhtml_branch_coverage=1 00:05:09.863 --rc genhtml_function_coverage=1 00:05:09.863 --rc genhtml_legend=1 00:05:09.863 --rc geninfo_all_blocks=1 00:05:09.863 --rc geninfo_unexecuted_blocks=1 00:05:09.863 00:05:09.863 ' 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:09.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.863 --rc genhtml_branch_coverage=1 00:05:09.863 --rc genhtml_function_coverage=1 00:05:09.863 --rc genhtml_legend=1 00:05:09.863 --rc geninfo_all_blocks=1 00:05:09.863 --rc geninfo_unexecuted_blocks=1 00:05:09.863 00:05:09.863 ' 00:05:09.863 19:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:09.863 19:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1911929 00:05:09.863 19:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.863 19:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1911929 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1911929 ']' 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.863 19:12:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.863 [2024-10-17 19:12:33.443678] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:09.863 [2024-10-17 19:12:33.443727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1911929 ] 00:05:09.863 [2024-10-17 19:12:33.520217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.863 [2024-10-17 19:12:33.561982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.122 19:12:33 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.122 19:12:33 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:10.122 19:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:10.382 19:12:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1911929 00:05:10.382 19:12:33 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1911929 ']' 00:05:10.382 19:12:33 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1911929 00:05:10.382 19:12:33 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:10.382 19:12:33 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.382 19:12:33 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1911929 00:05:10.382 19:12:34 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.382 19:12:34 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.382 19:12:34 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1911929' 00:05:10.382 killing process with pid 1911929 00:05:10.382 19:12:34 alias_rpc -- common/autotest_common.sh@969 -- # kill 1911929 00:05:10.382 19:12:34 alias_rpc -- common/autotest_common.sh@974 -- # wait 1911929 00:05:10.641 00:05:10.641 real 0m1.120s 00:05:10.641 user 0m1.143s 00:05:10.641 sys 0m0.399s 00:05:10.641 19:12:34 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.641 19:12:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.641 ************************************ 00:05:10.641 END TEST alias_rpc 00:05:10.641 ************************************ 00:05:10.641 19:12:34 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:10.641 19:12:34 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.641 19:12:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.641 19:12:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.641 19:12:34 -- common/autotest_common.sh@10 -- # set +x 00:05:10.641 ************************************ 00:05:10.641 START TEST spdkcli_tcp 00:05:10.641 ************************************ 00:05:10.641 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:10.901 * Looking for test storage... 
00:05:10.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.901 19:12:34 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.901 --rc genhtml_branch_coverage=1 00:05:10.901 --rc genhtml_function_coverage=1 00:05:10.901 --rc genhtml_legend=1 00:05:10.901 --rc geninfo_all_blocks=1 00:05:10.901 --rc geninfo_unexecuted_blocks=1 00:05:10.901 00:05:10.901 ' 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.901 --rc genhtml_branch_coverage=1 00:05:10.901 --rc genhtml_function_coverage=1 00:05:10.901 --rc genhtml_legend=1 00:05:10.901 --rc geninfo_all_blocks=1 00:05:10.901 --rc 
geninfo_unexecuted_blocks=1 00:05:10.901 00:05:10.901 ' 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:10.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.901 --rc genhtml_branch_coverage=1 00:05:10.901 --rc genhtml_function_coverage=1 00:05:10.901 --rc genhtml_legend=1 00:05:10.901 --rc geninfo_all_blocks=1 00:05:10.901 --rc geninfo_unexecuted_blocks=1 00:05:10.901 00:05:10.901 ' 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.901 --rc genhtml_branch_coverage=1 00:05:10.901 --rc genhtml_function_coverage=1 00:05:10.901 --rc genhtml_legend=1 00:05:10.901 --rc geninfo_all_blocks=1 00:05:10.901 --rc geninfo_unexecuted_blocks=1 00:05:10.901 00:05:10.901 ' 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1912217 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.901 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1912217 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1912217 ']' 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.901 19:12:34 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.902 19:12:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.902 [2024-10-17 19:12:34.637419] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:10.902 [2024-10-17 19:12:34.637467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912217 ] 00:05:11.162 [2024-10-17 19:12:34.712786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.162 [2024-10-17 19:12:34.755789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.162 [2024-10-17 19:12:34.755791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.422 19:12:34 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.422 19:12:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:11.422 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1912222 00:05:11.422 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.422 19:12:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.422 [ 00:05:11.422 "bdev_malloc_delete", 00:05:11.422 "bdev_malloc_create", 00:05:11.422 "bdev_null_resize", 00:05:11.422 "bdev_null_delete", 00:05:11.422 "bdev_null_create", 00:05:11.422 "bdev_nvme_cuse_unregister", 00:05:11.422 "bdev_nvme_cuse_register", 00:05:11.422 "bdev_opal_new_user", 00:05:11.422 "bdev_opal_set_lock_state", 00:05:11.422 "bdev_opal_delete", 00:05:11.422 "bdev_opal_get_info", 00:05:11.422 "bdev_opal_create", 00:05:11.422 "bdev_nvme_opal_revert", 00:05:11.422 "bdev_nvme_opal_init", 00:05:11.422 "bdev_nvme_send_cmd", 00:05:11.422 "bdev_nvme_set_keys", 00:05:11.422 "bdev_nvme_get_path_iostat", 00:05:11.422 "bdev_nvme_get_mdns_discovery_info", 00:05:11.422 "bdev_nvme_stop_mdns_discovery", 00:05:11.422 "bdev_nvme_start_mdns_discovery", 00:05:11.422 "bdev_nvme_set_multipath_policy", 00:05:11.422 "bdev_nvme_set_preferred_path", 00:05:11.422 "bdev_nvme_get_io_paths", 00:05:11.422 "bdev_nvme_remove_error_injection", 00:05:11.422 "bdev_nvme_add_error_injection", 00:05:11.422 "bdev_nvme_get_discovery_info", 00:05:11.422 "bdev_nvme_stop_discovery", 00:05:11.422 "bdev_nvme_start_discovery", 00:05:11.422 "bdev_nvme_get_controller_health_info", 00:05:11.422 "bdev_nvme_disable_controller", 00:05:11.422 "bdev_nvme_enable_controller", 00:05:11.422 "bdev_nvme_reset_controller", 00:05:11.422 "bdev_nvme_get_transport_statistics", 00:05:11.422 "bdev_nvme_apply_firmware", 00:05:11.422 "bdev_nvme_detach_controller", 00:05:11.422 "bdev_nvme_get_controllers", 00:05:11.422 "bdev_nvme_attach_controller", 00:05:11.422 "bdev_nvme_set_hotplug", 00:05:11.422 "bdev_nvme_set_options", 00:05:11.422 "bdev_passthru_delete", 00:05:11.422 "bdev_passthru_create", 00:05:11.422 "bdev_lvol_set_parent_bdev", 00:05:11.422 "bdev_lvol_set_parent", 00:05:11.422 "bdev_lvol_check_shallow_copy", 00:05:11.422 "bdev_lvol_start_shallow_copy", 00:05:11.422 "bdev_lvol_grow_lvstore", 00:05:11.422 "bdev_lvol_get_lvols", 00:05:11.422 "bdev_lvol_get_lvstores", 00:05:11.422 "bdev_lvol_delete", 00:05:11.422 "bdev_lvol_set_read_only", 00:05:11.422 "bdev_lvol_resize", 00:05:11.422 "bdev_lvol_decouple_parent", 00:05:11.422 "bdev_lvol_inflate", 00:05:11.422 "bdev_lvol_rename", 00:05:11.422 "bdev_lvol_clone_bdev", 00:05:11.422 "bdev_lvol_clone", 00:05:11.422 "bdev_lvol_snapshot", 00:05:11.422 "bdev_lvol_create", 00:05:11.422 "bdev_lvol_delete_lvstore", 00:05:11.422 "bdev_lvol_rename_lvstore", 
00:05:11.422 "bdev_lvol_create_lvstore", 00:05:11.422 "bdev_raid_set_options", 00:05:11.422 "bdev_raid_remove_base_bdev", 00:05:11.422 "bdev_raid_add_base_bdev", 00:05:11.422 "bdev_raid_delete", 00:05:11.422 "bdev_raid_create", 00:05:11.422 "bdev_raid_get_bdevs", 00:05:11.422 "bdev_error_inject_error", 00:05:11.422 "bdev_error_delete", 00:05:11.422 "bdev_error_create", 00:05:11.422 "bdev_split_delete", 00:05:11.422 "bdev_split_create", 00:05:11.422 "bdev_delay_delete", 00:05:11.422 "bdev_delay_create", 00:05:11.422 "bdev_delay_update_latency", 00:05:11.422 "bdev_zone_block_delete", 00:05:11.422 "bdev_zone_block_create", 00:05:11.422 "blobfs_create", 00:05:11.422 "blobfs_detect", 00:05:11.422 "blobfs_set_cache_size", 00:05:11.422 "bdev_aio_delete", 00:05:11.422 "bdev_aio_rescan", 00:05:11.422 "bdev_aio_create", 00:05:11.422 "bdev_ftl_set_property", 00:05:11.422 "bdev_ftl_get_properties", 00:05:11.422 "bdev_ftl_get_stats", 00:05:11.422 "bdev_ftl_unmap", 00:05:11.422 "bdev_ftl_unload", 00:05:11.422 "bdev_ftl_delete", 00:05:11.422 "bdev_ftl_load", 00:05:11.422 "bdev_ftl_create", 00:05:11.422 "bdev_virtio_attach_controller", 00:05:11.422 "bdev_virtio_scsi_get_devices", 00:05:11.422 "bdev_virtio_detach_controller", 00:05:11.422 "bdev_virtio_blk_set_hotplug", 00:05:11.422 "bdev_iscsi_delete", 00:05:11.422 "bdev_iscsi_create", 00:05:11.422 "bdev_iscsi_set_options", 00:05:11.422 "accel_error_inject_error", 00:05:11.422 "ioat_scan_accel_module", 00:05:11.422 "dsa_scan_accel_module", 00:05:11.422 "iaa_scan_accel_module", 00:05:11.422 "vfu_virtio_create_fs_endpoint", 00:05:11.422 "vfu_virtio_create_scsi_endpoint", 00:05:11.422 "vfu_virtio_scsi_remove_target", 00:05:11.423 "vfu_virtio_scsi_add_target", 00:05:11.423 "vfu_virtio_create_blk_endpoint", 00:05:11.423 "vfu_virtio_delete_endpoint", 00:05:11.423 "keyring_file_remove_key", 00:05:11.423 "keyring_file_add_key", 00:05:11.423 "keyring_linux_set_options", 00:05:11.423 "fsdev_aio_delete", 00:05:11.423 "fsdev_aio_create", 00:05:11.423 "iscsi_get_histogram", 00:05:11.423 "iscsi_enable_histogram", 00:05:11.423 "iscsi_set_options", 00:05:11.423 "iscsi_get_auth_groups", 00:05:11.423 "iscsi_auth_group_remove_secret", 00:05:11.423 "iscsi_auth_group_add_secret", 00:05:11.423 "iscsi_delete_auth_group", 00:05:11.423 "iscsi_create_auth_group", 00:05:11.423 "iscsi_set_discovery_auth", 00:05:11.423 "iscsi_get_options", 00:05:11.423 "iscsi_target_node_request_logout", 00:05:11.423 "iscsi_target_node_set_redirect", 00:05:11.423 "iscsi_target_node_set_auth", 00:05:11.423 "iscsi_target_node_add_lun", 00:05:11.423 "iscsi_get_stats", 00:05:11.423 "iscsi_get_connections", 00:05:11.423 "iscsi_portal_group_set_auth", 00:05:11.423 "iscsi_start_portal_group", 00:05:11.423 "iscsi_delete_portal_group", 00:05:11.423 "iscsi_create_portal_group", 00:05:11.423 "iscsi_get_portal_groups", 00:05:11.423 "iscsi_delete_target_node", 00:05:11.423 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.423 "iscsi_target_node_add_pg_ig_maps", 00:05:11.423 "iscsi_create_target_node", 00:05:11.423 "iscsi_get_target_nodes", 00:05:11.423 "iscsi_delete_initiator_group", 00:05:11.423 "iscsi_initiator_group_remove_initiators", 00:05:11.423 "iscsi_initiator_group_add_initiators", 00:05:11.423 "iscsi_create_initiator_group", 00:05:11.423 "iscsi_get_initiator_groups", 00:05:11.423 "nvmf_set_crdt", 00:05:11.423 "nvmf_set_config", 00:05:11.423 "nvmf_set_max_subsystems", 00:05:11.423 "nvmf_stop_mdns_prr", 00:05:11.423 "nvmf_publish_mdns_prr", 00:05:11.423 "nvmf_subsystem_get_listeners", 00:05:11.423 
"nvmf_subsystem_get_qpairs", 00:05:11.423 "nvmf_subsystem_get_controllers", 00:05:11.423 "nvmf_get_stats", 00:05:11.423 "nvmf_get_transports", 00:05:11.423 "nvmf_create_transport", 00:05:11.423 "nvmf_get_targets", 00:05:11.423 "nvmf_delete_target", 00:05:11.423 "nvmf_create_target", 00:05:11.423 "nvmf_subsystem_allow_any_host", 00:05:11.423 "nvmf_subsystem_set_keys", 00:05:11.423 "nvmf_subsystem_remove_host", 00:05:11.423 "nvmf_subsystem_add_host", 00:05:11.423 "nvmf_ns_remove_host", 00:05:11.423 "nvmf_ns_add_host", 00:05:11.423 "nvmf_subsystem_remove_ns", 00:05:11.423 "nvmf_subsystem_set_ns_ana_group", 00:05:11.423 "nvmf_subsystem_add_ns", 00:05:11.423 "nvmf_subsystem_listener_set_ana_state", 00:05:11.423 "nvmf_discovery_get_referrals", 00:05:11.423 "nvmf_discovery_remove_referral", 00:05:11.423 "nvmf_discovery_add_referral", 00:05:11.423 "nvmf_subsystem_remove_listener", 00:05:11.423 "nvmf_subsystem_add_listener", 00:05:11.423 "nvmf_delete_subsystem", 00:05:11.423 "nvmf_create_subsystem", 00:05:11.423 "nvmf_get_subsystems", 00:05:11.423 "env_dpdk_get_mem_stats", 00:05:11.423 "nbd_get_disks", 00:05:11.423 "nbd_stop_disk", 00:05:11.423 "nbd_start_disk", 00:05:11.423 "ublk_recover_disk", 00:05:11.423 "ublk_get_disks", 00:05:11.423 "ublk_stop_disk", 00:05:11.423 "ublk_start_disk", 00:05:11.423 "ublk_destroy_target", 00:05:11.423 "ublk_create_target", 00:05:11.423 "virtio_blk_create_transport", 00:05:11.423 "virtio_blk_get_transports", 00:05:11.423 "vhost_controller_set_coalescing", 00:05:11.423 "vhost_get_controllers", 00:05:11.423 "vhost_delete_controller", 00:05:11.423 "vhost_create_blk_controller", 00:05:11.423 "vhost_scsi_controller_remove_target", 00:05:11.423 "vhost_scsi_controller_add_target", 00:05:11.423 "vhost_start_scsi_controller", 00:05:11.423 "vhost_create_scsi_controller", 00:05:11.423 "thread_set_cpumask", 00:05:11.423 "scheduler_set_options", 00:05:11.423 "framework_get_governor", 00:05:11.423 "framework_get_scheduler", 00:05:11.423 "framework_set_scheduler", 00:05:11.423 "framework_get_reactors", 00:05:11.423 "thread_get_io_channels", 00:05:11.423 "thread_get_pollers", 00:05:11.423 "thread_get_stats", 00:05:11.423 "framework_monitor_context_switch", 00:05:11.423 "spdk_kill_instance", 00:05:11.423 "log_enable_timestamps", 00:05:11.423 "log_get_flags", 00:05:11.423 "log_clear_flag", 00:05:11.423 "log_set_flag", 00:05:11.423 "log_get_level", 00:05:11.423 "log_set_level", 00:05:11.423 "log_get_print_level", 00:05:11.423 "log_set_print_level", 00:05:11.423 "framework_enable_cpumask_locks", 00:05:11.423 "framework_disable_cpumask_locks", 00:05:11.423 "framework_wait_init", 00:05:11.423 "framework_start_init", 00:05:11.423 "scsi_get_devices", 00:05:11.423 "bdev_get_histogram", 00:05:11.423 "bdev_enable_histogram", 00:05:11.423 "bdev_set_qos_limit", 00:05:11.423 "bdev_set_qd_sampling_period", 00:05:11.423 "bdev_get_bdevs", 00:05:11.423 "bdev_reset_iostat", 00:05:11.423 "bdev_get_iostat", 00:05:11.423 "bdev_examine", 00:05:11.423 "bdev_wait_for_examine", 00:05:11.423 "bdev_set_options", 00:05:11.423 "accel_get_stats", 00:05:11.423 "accel_set_options", 00:05:11.423 "accel_set_driver", 00:05:11.423 "accel_crypto_key_destroy", 00:05:11.423 "accel_crypto_keys_get", 00:05:11.423 "accel_crypto_key_create", 00:05:11.423 "accel_assign_opc", 00:05:11.423 "accel_get_module_info", 00:05:11.423 "accel_get_opc_assignments", 00:05:11.423 "vmd_rescan", 00:05:11.423 "vmd_remove_device", 00:05:11.423 "vmd_enable", 00:05:11.423 "sock_get_default_impl", 00:05:11.423 "sock_set_default_impl", 
00:05:11.423 "sock_impl_set_options", 00:05:11.423 "sock_impl_get_options", 00:05:11.423 "iobuf_get_stats", 00:05:11.423 "iobuf_set_options", 00:05:11.423 "keyring_get_keys", 00:05:11.423 "vfu_tgt_set_base_path", 00:05:11.423 "framework_get_pci_devices", 00:05:11.423 "framework_get_config", 00:05:11.423 "framework_get_subsystems", 00:05:11.423 "fsdev_set_opts", 00:05:11.423 "fsdev_get_opts", 00:05:11.423 "trace_get_info", 00:05:11.423 "trace_get_tpoint_group_mask", 00:05:11.423 "trace_disable_tpoint_group", 00:05:11.423 "trace_enable_tpoint_group", 00:05:11.423 "trace_clear_tpoint_mask", 00:05:11.423 "trace_set_tpoint_mask", 00:05:11.423 "notify_get_notifications", 00:05:11.423 "notify_get_types", 00:05:11.423 "spdk_get_version", 00:05:11.423 "rpc_get_methods" 00:05:11.423 ] 00:05:11.423 19:12:35 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.423 19:12:35 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.423 19:12:35 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1912217 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1912217 ']' 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1912217 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.423 19:12:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1912217 00:05:11.683 19:12:35 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.683 19:12:35 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.683 19:12:35 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1912217' 00:05:11.683 killing process with pid 1912217 00:05:11.683 19:12:35 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1912217 00:05:11.683 19:12:35 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1912217 00:05:11.942 00:05:11.942 real 0m1.129s 00:05:11.942 user 0m1.884s 00:05:11.942 sys 0m0.432s 00:05:11.942 19:12:35 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.942 19:12:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.942 ************************************ 00:05:11.942 END TEST spdkcli_tcp 00:05:11.942 ************************************ 00:05:11.942 19:12:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.942 19:12:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.942 19:12:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.942 19:12:35 -- common/autotest_common.sh@10 -- # set +x 00:05:11.942 ************************************ 00:05:11.942 START TEST dpdk_mem_utility 00:05:11.942 ************************************ 00:05:11.942 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:11.942 * Looking for test storage... 
00:05:11.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:11.942 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:11.942 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:11.942 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.201 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.201 19:12:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.201 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.201 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.201 --rc genhtml_branch_coverage=1 00:05:12.201 --rc genhtml_function_coverage=1 00:05:12.201 --rc genhtml_legend=1 00:05:12.201 --rc geninfo_all_blocks=1 00:05:12.201 --rc geninfo_unexecuted_blocks=1 00:05:12.201 00:05:12.201 ' 00:05:12.201 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.201 --rc 
genhtml_branch_coverage=1 00:05:12.201 --rc genhtml_function_coverage=1 00:05:12.201 --rc genhtml_legend=1 00:05:12.201 --rc geninfo_all_blocks=1 00:05:12.201 --rc geninfo_unexecuted_blocks=1 00:05:12.201 00:05:12.201 ' 00:05:12.201 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:12.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.201 --rc genhtml_branch_coverage=1 00:05:12.201 --rc genhtml_function_coverage=1 00:05:12.202 --rc genhtml_legend=1 00:05:12.202 --rc geninfo_all_blocks=1 00:05:12.202 --rc geninfo_unexecuted_blocks=1 00:05:12.202 00:05:12.202 ' 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.202 --rc genhtml_branch_coverage=1 00:05:12.202 --rc genhtml_function_coverage=1 00:05:12.202 --rc genhtml_legend=1 00:05:12.202 --rc geninfo_all_blocks=1 00:05:12.202 --rc geninfo_unexecuted_blocks=1 00:05:12.202 00:05:12.202 ' 00:05:12.202 19:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:12.202 19:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1912416 00:05:12.202 19:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1912416 00:05:12.202 19:12:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1912416 ']' 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.202 19:12:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.202 [2024-10-17 19:12:35.830972] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:12.202 [2024-10-17 19:12:35.831026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912416 ] 00:05:12.202 [2024-10-17 19:12:35.908484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.202 [2024-10-17 19:12:35.952509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.140 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.140 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:13.140 19:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.140 19:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.140 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.140 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.140 { 00:05:13.140 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.140 } 00:05:13.140 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.140 19:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:13.140 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:13.140 1 heaps totaling size 810.000000 MiB 00:05:13.140 size: 810.000000 MiB heap id: 0 00:05:13.140 end heaps---------- 00:05:13.140 9 mempools totaling size 595.772034 MiB 00:05:13.140 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.140 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.140 size: 92.545471 MiB name: bdev_io_1912416 00:05:13.140 size: 50.003479 MiB name: msgpool_1912416 00:05:13.140 size: 36.509338 MiB name: fsdev_io_1912416 00:05:13.140 size: 21.763794 MiB name: PDU_Pool 00:05:13.140 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.140 size: 4.133484 MiB name: evtpool_1912416 00:05:13.140 size: 0.026123 MiB name: Session_Pool 00:05:13.140 end mempools------- 00:05:13.140 6 memzones totaling size 4.142822 MiB 00:05:13.140 size: 1.000366 MiB name: RG_ring_0_1912416 00:05:13.140 size: 1.000366 MiB name: RG_ring_1_1912416 00:05:13.140 size: 1.000366 MiB name: RG_ring_4_1912416 00:05:13.140 size: 1.000366 MiB name: RG_ring_5_1912416 00:05:13.140 size: 0.125366 MiB name: RG_ring_2_1912416 00:05:13.140 size: 0.015991 MiB name: RG_ring_3_1912416 00:05:13.140 end memzones------- 00:05:13.140 19:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.140 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:13.140 list of free elements. 
size: 10.862488 MiB 00:05:13.140 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:13.140 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:13.140 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:13.140 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:13.140 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:13.140 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:13.140 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:13.140 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:13.140 element at address: 0x20001a600000 with size: 0.582886 MiB 00:05:13.140 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:13.140 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:13.140 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:13.140 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:13.140 element at address: 0x200027a00000 with size: 0.410034 MiB 00:05:13.140 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:13.140 list of standard malloc elements. size: 199.218628 MiB 00:05:13.140 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:13.140 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:13.140 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:13.140 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:13.140 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:13.140 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:13.140 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:13.140 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:13.140 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:13.140 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:13.140 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:13.140 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:13.140 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200027a69040 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:13.140 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:13.140 list of memzone associated elements. size: 599.918884 MiB 00:05:13.140 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:13.140 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.140 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:13.140 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.140 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:13.140 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1912416_0 00:05:13.140 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:13.140 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1912416_0 00:05:13.140 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:13.140 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1912416_0 00:05:13.140 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:13.140 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.140 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:13.140 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.140 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:13.140 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1912416_0 00:05:13.140 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:13.140 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1912416 00:05:13.140 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:13.140 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1912416 00:05:13.140 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:13.140 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.140 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:13.140 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.140 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:13.140 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.140 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:13.140 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.140 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:13.140 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1912416 00:05:13.140 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:13.141 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1912416 00:05:13.141 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:13.141 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1912416 00:05:13.141 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:05:13.141 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1912416 00:05:13.141 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:13.141 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1912416 00:05:13.141 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:13.141 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1912416 00:05:13.141 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:13.141 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.141 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:13.141 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.141 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:13.141 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.141 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:13.141 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1912416 00:05:13.141 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:13.141 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1912416 00:05:13.141 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:13.141 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.141 element at address: 0x200027a69100 with size: 0.023743 MiB 00:05:13.141 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.141 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:13.141 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1912416 00:05:13.141 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:05:13.141 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.141 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:13.141 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1912416 00:05:13.141 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:13.141 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1912416 00:05:13.141 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:13.141 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1912416 00:05:13.141 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:05:13.141 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.141 19:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.141 19:12:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1912416 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1912416 ']' 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1912416 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1912416 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1912416' 00:05:13.141 killing process with pid 1912416 00:05:13.141 19:12:36 
dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1912416 00:05:13.141 19:12:36 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1912416 00:05:13.400 00:05:13.400 real 0m1.509s 00:05:13.400 user 0m1.591s 00:05:13.400 sys 0m0.441s 00:05:13.400 19:12:37 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.400 19:12:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.400 ************************************ 00:05:13.400 END TEST dpdk_mem_utility 00:05:13.400 ************************************ 00:05:13.400 19:12:37 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:13.400 19:12:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.400 19:12:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.400 19:12:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.659 ************************************ 00:05:13.659 START TEST event 00:05:13.659 ************************************ 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:13.659 * Looking for test storage... 00:05:13.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.659 19:12:37 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.659 19:12:37 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.659 19:12:37 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.659 19:12:37 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.659 19:12:37 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.659 19:12:37 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.659 19:12:37 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.659 19:12:37 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.659 19:12:37 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.659 19:12:37 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.659 19:12:37 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.659 19:12:37 event -- scripts/common.sh@344 -- # case "$op" in 00:05:13.659 19:12:37 event -- scripts/common.sh@345 -- # : 1 00:05:13.659 19:12:37 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.659 19:12:37 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.659 19:12:37 event -- scripts/common.sh@365 -- # decimal 1 00:05:13.659 19:12:37 event -- scripts/common.sh@353 -- # local d=1 00:05:13.659 19:12:37 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.659 19:12:37 event -- scripts/common.sh@355 -- # echo 1 00:05:13.659 19:12:37 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.659 19:12:37 event -- scripts/common.sh@366 -- # decimal 2 00:05:13.659 19:12:37 event -- scripts/common.sh@353 -- # local d=2 00:05:13.659 19:12:37 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.659 19:12:37 event -- scripts/common.sh@355 -- # echo 2 00:05:13.659 19:12:37 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.659 19:12:37 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.659 19:12:37 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.659 19:12:37 event -- scripts/common.sh@368 -- # return 0 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.659 --rc genhtml_branch_coverage=1 00:05:13.659 --rc genhtml_function_coverage=1 00:05:13.659 --rc genhtml_legend=1 00:05:13.659 --rc geninfo_all_blocks=1 00:05:13.659 --rc geninfo_unexecuted_blocks=1 00:05:13.659 00:05:13.659 ' 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.659 --rc genhtml_branch_coverage=1 00:05:13.659 --rc genhtml_function_coverage=1 00:05:13.659 --rc genhtml_legend=1 00:05:13.659 --rc geninfo_all_blocks=1 00:05:13.659 --rc geninfo_unexecuted_blocks=1 00:05:13.659 00:05:13.659 ' 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.659 --rc genhtml_branch_coverage=1 00:05:13.659 --rc genhtml_function_coverage=1 00:05:13.659 --rc genhtml_legend=1 00:05:13.659 --rc geninfo_all_blocks=1 00:05:13.659 --rc geninfo_unexecuted_blocks=1 00:05:13.659 00:05:13.659 ' 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.659 --rc genhtml_branch_coverage=1 00:05:13.659 --rc genhtml_function_coverage=1 00:05:13.659 --rc genhtml_legend=1 00:05:13.659 --rc geninfo_all_blocks=1 00:05:13.659 --rc geninfo_unexecuted_blocks=1 00:05:13.659 00:05:13.659 ' 00:05:13.659 19:12:37 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:13.659 19:12:37 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:13.659 19:12:37 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:13.659 19:12:37 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.659 19:12:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.659 ************************************ 00:05:13.659 START TEST event_perf 00:05:13.659 ************************************ 00:05:13.659 19:12:37 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:13.659 Running I/O for 1 seconds...[2024-10-17 19:12:37.417657] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:13.659 [2024-10-17 19:12:37.417728] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1912820 ] 00:05:13.917 [2024-10-17 19:12:37.496853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:13.917 [2024-10-17 19:12:37.540452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.917 [2024-10-17 19:12:37.540563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.917 [2024-10-17 19:12:37.540644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.917 [2024-10-17 19:12:37.540644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.854 Running I/O for 1 seconds... 00:05:14.854 lcore 0: 204010 00:05:14.854 lcore 1: 204009 00:05:14.854 lcore 2: 204009 00:05:14.854 lcore 3: 204010 00:05:14.854 done. 00:05:14.854 00:05:14.854 real 0m1.183s 00:05:14.854 user 0m4.097s 00:05:14.854 sys 0m0.084s 00:05:14.854 19:12:38 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.854 19:12:38 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.854 ************************************ 00:05:14.854 END TEST event_perf 00:05:14.854 ************************************ 00:05:14.854 19:12:38 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:14.854 19:12:38 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:14.854 19:12:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.854 19:12:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.113 ************************************ 00:05:15.113 START TEST event_reactor 00:05:15.113 ************************************ 00:05:15.113 19:12:38 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:15.113 [2024-10-17 19:12:38.672899] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:15.113 [2024-10-17 19:12:38.672974] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913026 ] 00:05:15.113 [2024-10-17 19:12:38.750685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.113 [2024-10-17 19:12:38.790285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.050 test_start 00:05:16.050 oneshot 00:05:16.050 tick 100 00:05:16.050 tick 100 00:05:16.050 tick 250 00:05:16.050 tick 100 00:05:16.050 tick 100 00:05:16.050 tick 100 00:05:16.050 tick 250 00:05:16.050 tick 500 00:05:16.050 tick 100 00:05:16.050 tick 100 00:05:16.050 tick 250 00:05:16.050 tick 100 00:05:16.050 tick 100 00:05:16.050 test_end 00:05:16.050 00:05:16.050 real 0m1.177s 00:05:16.050 user 0m1.104s 00:05:16.050 sys 0m0.070s 00:05:16.050 19:12:39 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:16.050 19:12:39 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:16.050 ************************************ 00:05:16.050 END TEST event_reactor 00:05:16.050 ************************************ 00:05:16.309 19:12:39 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.309 19:12:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:16.309 19:12:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:16.309 19:12:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.309 ************************************ 00:05:16.309 START TEST event_reactor_perf 00:05:16.309 ************************************ 00:05:16.309 19:12:39 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:16.309 [2024-10-17 19:12:39.921563] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
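The event-framework micro-benchmarks traced in this run share one invocation pattern: -m sets the reactor core mask and -t the run time in seconds. A minimal sketch of invoking them by hand, assuming an already-built SPDK tree and using only the flags visible in this log (output format may differ across SPDK versions):

  ./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts, one reactor per core
  ./test/event/reactor/reactor -t 1                # oneshot and periodic tick events (see the tick 100/250/500 trace above)
  ./test/event/reactor_perf/reactor_perf -t 1      # reports an events-per-second figure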
00:05:16.309 [2024-10-17 19:12:39.921649] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913194 ] 00:05:16.309 [2024-10-17 19:12:40.003074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.309 [2024-10-17 19:12:40.051342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.765 test_start 00:05:17.765 test_end 00:05:17.765 Performance: 500728 events per second 00:05:17.765 00:05:17.765 real 0m1.191s 00:05:17.765 user 0m1.104s 00:05:17.765 sys 0m0.083s 00:05:17.765 19:12:41 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.765 19:12:41 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.765 ************************************ 00:05:17.765 END TEST event_reactor_perf 00:05:17.765 ************************************ 00:05:17.765 19:12:41 event -- event/event.sh@49 -- # uname -s 00:05:17.765 19:12:41 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:17.765 19:12:41 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:17.765 19:12:41 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.765 19:12:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.765 19:12:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.765 ************************************ 00:05:17.765 START TEST event_scheduler 00:05:17.765 ************************************ 00:05:17.765 19:12:41 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:17.765 * Looking for test storage... 
00:05:17.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:17.765 19:12:41 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:17.765 19:12:41 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:17.765 19:12:41 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:17.765 19:12:41 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.765 19:12:41 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.766 19:12:41 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.766 --rc genhtml_branch_coverage=1 00:05:17.766 --rc genhtml_function_coverage=1 00:05:17.766 --rc genhtml_legend=1 00:05:17.766 --rc geninfo_all_blocks=1 00:05:17.766 --rc geninfo_unexecuted_blocks=1 00:05:17.766 00:05:17.766 ' 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.766 --rc genhtml_branch_coverage=1 00:05:17.766 --rc genhtml_function_coverage=1 00:05:17.766 --rc genhtml_legend=1 00:05:17.766 --rc geninfo_all_blocks=1 00:05:17.766 --rc geninfo_unexecuted_blocks=1 00:05:17.766 00:05:17.766 ' 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.766 --rc genhtml_branch_coverage=1 00:05:17.766 --rc genhtml_function_coverage=1 00:05:17.766 --rc genhtml_legend=1 00:05:17.766 --rc geninfo_all_blocks=1 00:05:17.766 --rc geninfo_unexecuted_blocks=1 00:05:17.766 00:05:17.766 ' 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:17.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.766 --rc genhtml_branch_coverage=1 00:05:17.766 --rc genhtml_function_coverage=1 00:05:17.766 --rc genhtml_legend=1 00:05:17.766 --rc geninfo_all_blocks=1 00:05:17.766 --rc geninfo_unexecuted_blocks=1 00:05:17.766 00:05:17.766 ' 00:05:17.766 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:17.766 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1913512 00:05:17.766 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.766 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:17.766 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1913512 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1913512 ']' 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.766 19:12:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.766 [2024-10-17 19:12:41.394255] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:17.766 [2024-10-17 19:12:41.394304] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913512 ] 00:05:17.766 [2024-10-17 19:12:41.469309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:17.766 [2024-10-17 19:12:41.514165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.766 [2024-10-17 19:12:41.514272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.766 [2024-10-17 19:12:41.514379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.766 [2024-10-17 19:12:41.514380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:18.031 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 [2024-10-17 19:12:41.550922] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:18.031 [2024-10-17 19:12:41.550940] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:18.031 [2024-10-17 19:12:41.550949] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:18.031 [2024-10-17 19:12:41.550954] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:18.031 [2024-10-17 19:12:41.550959] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 [2024-10-17 19:12:41.623994] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 ************************************ 00:05:18.031 START TEST scheduler_create_thread 00:05:18.031 ************************************ 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 2 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 3 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 4 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 5 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 6 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 7 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 8 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 9 00:05:18.031 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.032 10 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.032 19:12:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.969 19:12:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.969 19:12:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:18.969 19:12:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.969 19:12:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.348 19:12:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.348 19:12:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:20.348 19:12:44 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:20.348 19:12:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.348 19:12:44 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.286 19:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.286 00:05:21.286 real 0m3.383s 00:05:21.286 user 0m0.022s 00:05:21.286 sys 0m0.007s 00:05:21.286 19:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:21.286 19:12:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.286 ************************************ 00:05:21.286 END TEST scheduler_create_thread 00:05:21.286 ************************************ 00:05:21.545 19:12:45 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:21.545 19:12:45 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1913512 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1913512 ']' 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1913512 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1913512 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1913512' 00:05:21.545 killing process with pid 1913512 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1913512 00:05:21.545 19:12:45 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1913512 00:05:21.804 [2024-10-17 19:12:45.423954] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
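scheduler_create_thread drives the target entirely through a plugin RPC namespace: rpc.py runs with --plugin scheduler_plugin, threads are created with a name (-n), cpumask (-m) and active percentage (-a), and the returned thread id (11 and 12 above) is what set_active and delete operate on. A sketch mirroring the traced calls; the plugin ships with the test app rather than with stock rpc.py, so its module must be importable (the path below is an assumption):

  export PYTHONPATH=$PYTHONPATH:./test/event/scheduler    # assumed plugin location
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50    # id from create
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12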
00:05:22.064 00:05:22.064 real 0m4.456s 00:05:22.064 user 0m7.767s 00:05:22.064 sys 0m0.376s 00:05:22.064 19:12:45 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.064 19:12:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.064 ************************************ 00:05:22.064 END TEST event_scheduler 00:05:22.064 ************************************ 00:05:22.064 19:12:45 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:22.064 19:12:45 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:22.064 19:12:45 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.064 19:12:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.064 19:12:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.064 ************************************ 00:05:22.064 START TEST app_repeat 00:05:22.064 ************************************ 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1914356 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1914356' 00:05:22.064 Process app_repeat pid: 1914356 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:22.064 spdk_app_start Round 0 00:05:22.064 19:12:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1914356 /var/tmp/spdk-nbd.sock 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1914356 ']' 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.064 19:12:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.064 [2024-10-17 19:12:45.739275] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:22.064 [2024-10-17 19:12:45.739329] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1914356 ] 00:05:22.064 [2024-10-17 19:12:45.814058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.323 [2024-10-17 19:12:45.855660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.323 [2024-10-17 19:12:45.855660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.323 19:12:45 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:22.323 19:12:45 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:22.323 19:12:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.583 Malloc0 00:05:22.583 19:12:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.583 Malloc1 00:05:22.842 19:12:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.842 /dev/nbd0 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.842 1+0 records in 00:05:22.842 1+0 records out 00:05:22.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225181 s, 18.2 MB/s 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:22.842 19:12:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.842 19:12:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.102 /dev/nbd1 00:05:23.102 19:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.102 19:12:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.102 1+0 records in 00:05:23.102 1+0 records out 00:05:23.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212667 s, 19.3 MB/s 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:23.102 19:12:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:23.102 19:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.102 19:12:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.102 
19:12:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.102 19:12:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.102 19:12:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.362 { 00:05:23.362 "nbd_device": "/dev/nbd0", 00:05:23.362 "bdev_name": "Malloc0" 00:05:23.362 }, 00:05:23.362 { 00:05:23.362 "nbd_device": "/dev/nbd1", 00:05:23.362 "bdev_name": "Malloc1" 00:05:23.362 } 00:05:23.362 ]' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.362 { 00:05:23.362 "nbd_device": "/dev/nbd0", 00:05:23.362 "bdev_name": "Malloc0" 00:05:23.362 }, 00:05:23.362 { 00:05:23.362 "nbd_device": "/dev/nbd1", 00:05:23.362 "bdev_name": "Malloc1" 00:05:23.362 } 00:05:23.362 ]' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.362 /dev/nbd1' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.362 /dev/nbd1' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.362 256+0 records in 00:05:23.362 256+0 records out 00:05:23.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104396 s, 100 MB/s 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.362 256+0 records in 00:05:23.362 256+0 records out 00:05:23.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137024 s, 76.5 MB/s 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.362 19:12:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.621 256+0 records in 00:05:23.621 256+0 records out 00:05:23.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146591 s, 71.5 MB/s 00:05:23.621 19:12:47 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.621 19:12:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.879 19:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.138 19:12:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.138 19:12:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.397 19:12:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:24.657 [2024-10-17 19:12:48.198648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:24.657 [2024-10-17 19:12:48.237644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.657 [2024-10-17 19:12:48.237645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.657 [2024-10-17 19:12:48.278295] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:24.657 [2024-10-17 19:12:48.278329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.947 19:12:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.947 19:12:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:27.947 spdk_app_start Round 1 00:05:27.947 19:12:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1914356 /var/tmp/spdk-nbd.sock 00:05:27.947 19:12:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1914356 ']' 00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
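Each app_repeat round in this trace repeats the same nbd data-verify pass (Round 0's is above, Rounds 1 and 2 follow): seed a 1 MiB file from /dev/urandom, dd it onto every exported nbd device with O_DIRECT, then cmp the device contents back against the file. A standalone sketch of that round trip; the paths are placeholders, not the harness's workspace paths:

#!/usr/bin/env bash
set -e
tmp=/tmp/nbdrandtest             # placeholder for the harness's temp file
devices=(/dev/nbd0 /dev/nbd1)

# 256 x 4 KiB = 1 MiB of random reference data.
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in "${devices[@]}"; do
    # Write it to each device, bypassing the page cache.
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${devices[@]}"; do
    # Byte-for-byte compare of the first 1 MiB proves the write survived.
    cmp -b -n 1M "$tmp" "$dev"
done
rm "$tmp"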
00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.948 19:12:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.948 19:12:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.948 Malloc0 00:05:27.948 19:12:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.948 Malloc1 00:05:27.948 19:12:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.948 19:12:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.207 /dev/nbd0 00:05:28.207 19:12:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.207 19:12:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:28.207 1+0 records in 00:05:28.207 1+0 records out 00:05:28.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.4216e-05 s, 43.5 MB/s 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.207 19:12:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.207 19:12:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.207 19:12:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.207 19:12:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.466 /dev/nbd1 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.466 1+0 records in 00:05:28.466 1+0 records out 00:05:28.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215359 s, 19.0 MB/s 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.466 19:12:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.466 19:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:28.725 { 00:05:28.725 "nbd_device": "/dev/nbd0", 00:05:28.725 "bdev_name": "Malloc0" 00:05:28.725 }, 00:05:28.725 { 00:05:28.725 "nbd_device": "/dev/nbd1", 00:05:28.725 "bdev_name": "Malloc1" 00:05:28.725 } 00:05:28.725 ]' 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.725 { 00:05:28.725 "nbd_device": "/dev/nbd0", 00:05:28.725 "bdev_name": "Malloc0" 00:05:28.725 }, 00:05:28.725 { 00:05:28.725 "nbd_device": "/dev/nbd1", 00:05:28.725 "bdev_name": "Malloc1" 00:05:28.725 } 00:05:28.725 ]' 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.725 /dev/nbd1' 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.725 /dev/nbd1' 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.725 19:12:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.726 256+0 records in 00:05:28.726 256+0 records out 00:05:28.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00354047 s, 296 MB/s 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.726 256+0 records in 00:05:28.726 256+0 records out 00:05:28.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138481 s, 75.7 MB/s 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.726 256+0 records in 00:05:28.726 256+0 records out 00:05:28.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146483 s, 71.6 MB/s 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.726 19:12:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.985 19:12:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.244 19:12:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.245 19:12:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.504 19:12:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.504 19:12:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.763 19:12:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.763 [2024-10-17 19:12:53.471831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.763 [2024-10-17 19:12:53.508310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.763 [2024-10-17 19:12:53.508310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.022 [2024-10-17 19:12:53.549826] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.022 [2024-10-17 19:12:53.549866] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.556 19:12:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.556 19:12:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:32.556 spdk_app_start Round 2 00:05:32.556 19:12:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1914356 /var/tmp/spdk-nbd.sock 00:05:32.556 19:12:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1914356 ']' 00:05:32.556 19:12:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.556 19:12:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.556 19:12:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
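The waitfornbd checks that precede each dd in the traces above reduce to a bounded poll of /proc/partitions followed by a single O_DIRECT read to prove the device is actually usable. A simplified reconstruction of that helper; the retry delay is an assumption, and the real function lives in SPDK's test/common scripts:

waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest   # placeholder temp path
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                               # assumed back-off between polls
    done
    ((i <= 20)) || return 1                     # device never appeared
    # One direct-I/O block read; a zero-byte result means it is not readable.
    dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]
}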
00:05:32.556 19:12:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.556 19:12:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.815 19:12:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.815 19:12:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.815 19:12:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.074 Malloc0 00:05:33.074 19:12:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.333 Malloc1 00:05:33.333 19:12:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.333 19:12:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.592 /dev/nbd0 00:05:33.592 19:12:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.592 19:12:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:33.592 1+0 records in 00:05:33.592 1+0 records out 00:05:33.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184974 s, 22.1 MB/s 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.592 19:12:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.592 19:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.592 19:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.592 19:12:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.851 /dev/nbd1 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.851 1+0 records in 00:05:33.851 1+0 records out 00:05:33.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171368 s, 23.9 MB/s 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.851 19:12:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.851 19:12:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:34.112 { 00:05:34.112 "nbd_device": "/dev/nbd0", 00:05:34.112 "bdev_name": "Malloc0" 00:05:34.112 }, 00:05:34.112 { 00:05:34.112 "nbd_device": "/dev/nbd1", 00:05:34.112 "bdev_name": "Malloc1" 00:05:34.112 } 00:05:34.112 ]' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.112 { 00:05:34.112 "nbd_device": "/dev/nbd0", 00:05:34.112 "bdev_name": "Malloc0" 00:05:34.112 }, 00:05:34.112 { 00:05:34.112 "nbd_device": "/dev/nbd1", 00:05:34.112 "bdev_name": "Malloc1" 00:05:34.112 } 00:05:34.112 ]' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.112 /dev/nbd1' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.112 /dev/nbd1' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.112 256+0 records in 00:05:34.112 256+0 records out 00:05:34.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109004 s, 96.2 MB/s 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.112 256+0 records in 00:05:34.112 256+0 records out 00:05:34.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137892 s, 76.0 MB/s 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.112 256+0 records in 00:05:34.112 256+0 records out 00:05:34.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014824 s, 70.7 MB/s 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.112 19:12:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.372 19:12:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.631 19:12:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.890 19:12:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.890 19:12:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.890 19:12:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.149 [2024-10-17 19:12:58.792723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.149 [2024-10-17 19:12:58.828928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.149 [2024-10-17 19:12:58.828929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.149 [2024-10-17 19:12:58.869550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.149 [2024-10-17 19:12:58.869591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.439 19:13:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1914356 /var/tmp/spdk-nbd.sock 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1914356 ']' 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
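Both teardowns in this run, the scheduler app earlier and app_repeat in the trace just below, funnel through the same killprocess helper: confirm the pid is alive with kill -0, check what it actually is via ps (here reactor_0/reactor_2, guarding against signalling a bare sudo wrapper), then SIGTERM and reap it. A simplified sketch that just refuses the sudo case rather than resolving the wrapped child as the real helper presumably does:

killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                  # is the process still alive?
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    # In the trace the name is reactor_0; a bare "sudo" would mean we are
    # about to signal the wrapper, which this sketch simply declines to do.
    [ "$process_name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                         # reap; non-zero exit on SIGTERM is fine
}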
00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:38.439 19:13:01 event.app_repeat -- event/event.sh@39 -- # killprocess 1914356 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1914356 ']' 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1914356 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1914356 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1914356' 00:05:38.439 killing process with pid 1914356 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1914356 00:05:38.439 19:13:01 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1914356 00:05:38.439 spdk_app_start is called in Round 0. 00:05:38.439 Shutdown signal received, stop current app iteration 00:05:38.439 Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 reinitialization... 00:05:38.439 spdk_app_start is called in Round 1. 00:05:38.439 Shutdown signal received, stop current app iteration 00:05:38.439 Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 reinitialization... 00:05:38.439 spdk_app_start is called in Round 2. 00:05:38.439 Shutdown signal received, stop current app iteration 00:05:38.439 Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 reinitialization... 00:05:38.439 spdk_app_start is called in Round 3. 
00:05:38.439 Shutdown signal received, stop current app iteration 00:05:38.439 19:13:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:38.439 19:13:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:38.439 00:05:38.439 real 0m16.348s 00:05:38.439 user 0m35.864s 00:05:38.439 sys 0m2.540s 00:05:38.439 19:13:02 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.439 19:13:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.439 ************************************ 00:05:38.439 END TEST app_repeat 00:05:38.439 ************************************ 00:05:38.439 19:13:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:38.439 19:13:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.439 19:13:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.439 19:13:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.439 19:13:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.439 ************************************ 00:05:38.439 START TEST cpu_locks 00:05:38.439 ************************************ 00:05:38.439 19:13:02 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:38.439 * Looking for test storage... 00:05:38.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:38.439 19:13:02 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:38.439 19:13:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:38.439 19:13:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.699 19:13:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:38.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.699 --rc genhtml_branch_coverage=1 00:05:38.699 --rc genhtml_function_coverage=1 00:05:38.699 --rc genhtml_legend=1 00:05:38.699 --rc geninfo_all_blocks=1 00:05:38.699 --rc geninfo_unexecuted_blocks=1 00:05:38.699 00:05:38.699 ' 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:38.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.699 --rc genhtml_branch_coverage=1 00:05:38.699 --rc genhtml_function_coverage=1 00:05:38.699 --rc genhtml_legend=1 00:05:38.699 --rc geninfo_all_blocks=1 00:05:38.699 --rc geninfo_unexecuted_blocks=1 00:05:38.699 00:05:38.699 ' 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:38.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.699 --rc genhtml_branch_coverage=1 00:05:38.699 --rc genhtml_function_coverage=1 00:05:38.699 --rc genhtml_legend=1 00:05:38.699 --rc geninfo_all_blocks=1 00:05:38.699 --rc geninfo_unexecuted_blocks=1 00:05:38.699 00:05:38.699 ' 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:38.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.699 --rc genhtml_branch_coverage=1 00:05:38.699 --rc genhtml_function_coverage=1 00:05:38.699 --rc genhtml_legend=1 00:05:38.699 --rc geninfo_all_blocks=1 00:05:38.699 --rc geninfo_unexecuted_blocks=1 00:05:38.699 00:05:38.699 ' 00:05:38.699 19:13:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:38.699 19:13:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:38.699 19:13:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:38.699 19:13:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.699 19:13:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.699 ************************************ 
00:05:38.699 START TEST default_locks 00:05:38.699 ************************************ 00:05:38.699 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:38.699 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1917353 00:05:38.699 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1917353 00:05:38.699 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:38.699 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1917353 ']' 00:05:38.700 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.700 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:38.700 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.700 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:38.700 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.700 [2024-10-17 19:13:02.383278] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:38.700 [2024-10-17 19:13:02.383316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917353 ] 00:05:38.700 [2024-10-17 19:13:02.457228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.959 [2024-10-17 19:13:02.497294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.959 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.959 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:38.959 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1917353 00:05:38.959 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1917353 00:05:38.959 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.218 lslocks: write error 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1917353 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1917353 ']' 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1917353 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1917353 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 1917353' 00:05:39.218 killing process with pid 1917353 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1917353 00:05:39.218 19:13:02 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1917353 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1917353 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1917353 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1917353 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1917353 ']' 00:05:39.478 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
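The locks_exist helper exercised above (cpu_locks.sh@22) is the central assertion of these tests: a spdk_tgt started with cpumask locks enabled holds a POSIX file lock per claimed core, and lslocks -p <pid> | grep -q spdk_cpu_lock confirms the lock is attributed to that pid. The "lslocks: write error" in the trace is most likely benign: grep -q exits at the first match and closes the pipe, so lslocks fails writing its remaining rows. A sketch of the check as the trace shows it:

    # Succeeds if <pid> holds at least one spdk_cpu_lock_* file lock
    # (sketch mirroring the lslocks | grep -q pipeline in the trace).
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 1917353 && echo "core lock held by pid 1917353"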
00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1917353) - No such process 00:05:39.737 ERROR: process (pid: 1917353) is no longer running 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.737 00:05:39.737 real 0m0.938s 00:05:39.737 user 0m0.882s 00:05:39.737 sys 0m0.437s 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.737 19:13:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.737 ************************************ 00:05:39.737 END TEST default_locks 00:05:39.737 ************************************ 00:05:39.737 19:13:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:39.737 19:13:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.737 19:13:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.737 19:13:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.737 ************************************ 00:05:39.737 START TEST default_locks_via_rpc 00:05:39.737 ************************************ 00:05:39.737 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:39.737 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1917492 00:05:39.737 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1917492 00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1917492 ']' 00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
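The NOT waitforlisten run above is a deliberate negative test: after default_locks kills the target, waiting on the dead pid must fail, and the trace shows exactly that (kill: (1917353) - No such process, return 1, es=1). NOT simply inverts the exit status of the command it wraps; a simplified sketch of that inversion (the real helper in autotest_common.sh also validates its argument first, as the type -t lines in the trace show):

    # Run a command that is expected to fail; succeed only if it does.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    # After the target was killed, its pid must no longer exist:
    NOT kill -0 1917353 && echo "pid 1917353 is gone, as expected"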
00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.738 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.738 [2024-10-17 19:13:03.385157] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:39.738 [2024-10-17 19:13:03.385200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917492 ] 00:05:39.738 [2024-10-17 19:13:03.460625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.738 [2024-10-17 19:13:03.502648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1917492 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1917492 00:05:39.997 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1917492 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1917492 ']' 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1917492 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1917492 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.256 
19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1917492' 00:05:40.256 killing process with pid 1917492 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1917492 00:05:40.256 19:13:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1917492 00:05:40.515 00:05:40.515 real 0m0.894s 00:05:40.515 user 0m0.830s 00:05:40.515 sys 0m0.425s 00:05:40.515 19:13:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:40.515 19:13:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.515 ************************************ 00:05:40.515 END TEST default_locks_via_rpc 00:05:40.515 ************************************ 00:05:40.515 19:13:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:40.515 19:13:04 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:40.515 19:13:04 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:40.515 19:13:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.515 ************************************ 00:05:40.515 START TEST non_locking_app_on_locked_coremask 00:05:40.515 ************************************ 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1917649 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1917649 /var/tmp/spdk.sock 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1917649 ']' 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:40.515 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:40.774 [2024-10-17 19:13:04.344881] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:40.774 [2024-10-17 19:13:04.344922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917649 ] 00:05:40.774 [2024-10-17 19:13:04.416368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.774 [2024-10-17 19:13:04.458611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1917727 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1917727 /var/tmp/spdk2.sock 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1917727 ']' 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:41.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.033 19:13:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.033 [2024-10-17 19:13:04.724586] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:41.033 [2024-10-17 19:13:04.724647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917727 ] 00:05:41.033 [2024-10-17 19:13:04.813415] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:41.033 [2024-10-17 19:13:04.813441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.292 [2024-10-17 19:13:04.901167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.860 19:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.860 19:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:41.860 19:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1917649 00:05:41.860 19:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1917649 00:05:41.860 19:13:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.428 lslocks: write error 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1917649 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1917649 ']' 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1917649 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1917649 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1917649' 00:05:42.428 killing process with pid 1917649 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1917649 00:05:42.428 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1917649 00:05:42.996 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1917727 00:05:42.996 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1917727 ']' 00:05:42.996 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1917727 00:05:42.996 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:42.996 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.996 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1917727 00:05:43.255 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.255 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.255 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1917727' 00:05:43.255 
killing process with pid 1917727 00:05:43.255 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1917727 00:05:43.255 19:13:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1917727 00:05:43.514 00:05:43.514 real 0m2.817s 00:05:43.514 user 0m2.968s 00:05:43.514 sys 0m0.934s 00:05:43.514 19:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.514 19:13:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.514 ************************************ 00:05:43.514 END TEST non_locking_app_on_locked_coremask 00:05:43.514 ************************************ 00:05:43.514 19:13:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:43.514 19:13:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.514 19:13:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.514 19:13:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.514 ************************************ 00:05:43.514 START TEST locking_app_on_unlocked_coremask 00:05:43.514 ************************************ 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1918154 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1918154 /var/tmp/spdk.sock 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1918154 ']' 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.514 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.514 [2024-10-17 19:13:07.234894] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:43.514 [2024-10-17 19:13:07.234941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918154 ] 00:05:43.773 [2024-10-17 19:13:07.310718] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
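The "CPU core locks deactivated" notice just above is the effect of --disable-cpumask-locks: the target still runs on its cpumask but skips taking the /var/tmp/spdk_cpu_lock_* files, which is also what allowed the second instance in non_locking_app_on_locked_coremask to share core 0 with a locked first instance. Condensed from the commands in the trace (waitforlisten and cleanup omitted):

    tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 &                                                 # claims the core 0 lock
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # shares core 0, takes no lock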
00:05:43.773 [2024-10-17 19:13:07.310745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.773 [2024-10-17 19:13:07.350165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1918315 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1918315 /var/tmp/spdk2.sock 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1918315 ']' 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.032 19:13:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.032 [2024-10-17 19:13:07.631288] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:44.032 [2024-10-17 19:13:07.631341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918315 ] 00:05:44.032 [2024-10-17 19:13:07.722723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.032 [2024-10-17 19:13:07.802634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.968 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.968 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:44.968 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1918315 00:05:44.968 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1918315 00:05:44.968 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.227 lslocks: write error 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1918154 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1918154 ']' 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1918154 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1918154 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1918154' 00:05:45.227 killing process with pid 1918154 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1918154 00:05:45.227 19:13:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1918154 00:05:45.796 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1918315 00:05:45.796 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1918315 ']' 00:05:45.796 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1918315 00:05:45.796 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:45.796 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.796 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1918315 00:05:46.055 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.055 19:13:09 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.055 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1918315' 00:05:46.055 killing process with pid 1918315 00:05:46.055 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1918315 00:05:46.055 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1918315 00:05:46.315 00:05:46.315 real 0m2.728s 00:05:46.315 user 0m2.867s 00:05:46.315 sys 0m0.920s 00:05:46.315 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.315 19:13:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.315 ************************************ 00:05:46.315 END TEST locking_app_on_unlocked_coremask 00:05:46.316 ************************************ 00:05:46.316 19:13:09 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:46.316 19:13:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.316 19:13:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.316 19:13:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.316 ************************************ 00:05:46.316 START TEST locking_app_on_locked_coremask 00:05:46.316 ************************************ 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1918649 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1918649 /var/tmp/spdk.sock 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1918649 ']' 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.316 19:13:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.316 [2024-10-17 19:13:10.027414] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:46.316 [2024-10-17 19:13:10.027456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918649 ] 00:05:46.575 [2024-10-17 19:13:10.103762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.575 [2024-10-17 19:13:10.145951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1918872 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1918872 /var/tmp/spdk2.sock 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1918872 /var/tmp/spdk2.sock 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.575 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1918872 /var/tmp/spdk2.sock 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1918872 ']' 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.834 19:13:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.834 [2024-10-17 19:13:10.390852] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:46.834 [2024-10-17 19:13:10.390897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1918872 ] 00:05:46.834 [2024-10-17 19:13:10.479033] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1918649 has claimed it. 00:05:46.834 [2024-10-17 19:13:10.479069] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1918872) - No such process 00:05:47.402 ERROR: process (pid: 1918872) is no longer running 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1918649 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1918649 00:05:47.402 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.661 lslocks: write error 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1918649 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1918649 ']' 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1918649 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1918649 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1918649' 00:05:47.661 killing process with pid 1918649 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1918649 00:05:47.661 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1918649 00:05:47.920 00:05:47.920 real 0m1.609s 00:05:47.920 user 0m1.719s 00:05:47.920 sys 0m0.524s 00:05:47.920 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:47.920 19:13:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.920 ************************************ 00:05:47.920 END TEST locking_app_on_locked_coremask 00:05:47.920 ************************************ 00:05:47.920 19:13:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:47.920 19:13:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.920 19:13:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.920 19:13:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.920 ************************************ 00:05:47.920 START TEST locking_overlapped_coremask 00:05:47.920 ************************************ 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1919080 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1919080 /var/tmp/spdk.sock 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1919080 ']' 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:47.920 19:13:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.920 [2024-10-17 19:13:11.696111] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:47.920 [2024-10-17 19:13:11.696149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919080 ] 00:05:48.180 [2024-10-17 19:13:11.770183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.180 [2024-10-17 19:13:11.814850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.180 [2024-10-17 19:13:11.814964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.180 [2024-10-17 19:13:11.814965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1919136 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1919136 /var/tmp/spdk2.sock 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1919136 /var/tmp/spdk2.sock 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1919136 /var/tmp/spdk2.sock 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1919136 ']' 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.439 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.439 [2024-10-17 19:13:12.075527] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:48.439 [2024-10-17 19:13:12.075576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919136 ] 00:05:48.439 [2024-10-17 19:13:12.167409] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1919080 has claimed it. 00:05:48.439 [2024-10-17 19:13:12.167444] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:49.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1919136) - No such process 00:05:49.007 ERROR: process (pid: 1919136) is no longer running 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1919080 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1919080 ']' 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1919080 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1919080 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1919080' 00:05:49.007 killing process with pid 1919080 00:05:49.007 19:13:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1919080 00:05:49.007 19:13:12 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1919080 00:05:49.576 00:05:49.576 real 0m1.416s 00:05:49.576 user 0m3.922s 00:05:49.576 sys 0m0.386s 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.576 ************************************ 00:05:49.576 END TEST locking_overlapped_coremask 00:05:49.576 ************************************ 00:05:49.576 19:13:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:49.576 19:13:13 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.576 19:13:13 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.576 19:13:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.576 ************************************ 00:05:49.576 START TEST locking_overlapped_coremask_via_rpc 00:05:49.576 ************************************ 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1919392 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1919392 /var/tmp/spdk.sock 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1919392 ']' 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.576 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.576 [2024-10-17 19:13:13.192461] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:49.576 [2024-10-17 19:13:13.192507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919392 ] 00:05:49.576 [2024-10-17 19:13:13.268154] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
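(The via_rpc variant defers the locking: the target above was booted with --disable-cpumask-locks, hence the "CPU core locks deactivated" notice, and locking is switched on later over the RPC socket. A minimal sketch of that sequence, run from the SPDK checkout, with a crude sleep standing in for the test's waitforlisten helper:

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &   # boots without claiming /var/tmp/spdk_cpu_lock_* files
sleep 2                                               # wait for /var/tmp/spdk.sock to appear
scripts/rpc.py framework_enable_cpumask_locks         # the first caller then claims cores 0-2
)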
00:05:49.576 [2024-10-17 19:13:13.268179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:49.576 [2024-10-17 19:13:13.310649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.576 [2024-10-17 19:13:13.310765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.576 [2024-10-17 19:13:13.310767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1919406 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1919406 /var/tmp/spdk2.sock 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1919406 ']' 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.836 19:13:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.836 [2024-10-17 19:13:13.583196] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:49.836 [2024-10-17 19:13:13.583240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919406 ] 00:05:50.095 [2024-10-17 19:13:13.675177] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
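(Both targets can now share core 2, which is only possible while the per-core locks are off. The lock itself lives in C inside spdk_tgt and, judging by the paths and errors in this log, is an advisory lock on /var/tmp/spdk_cpu_lock_NNN; the following is only a rough shell analogue of that idea, using a demo path rather than the real file:

exec 9>/tmp/demo_cpu_lock_002        # open a demo lock file on fd 9
if flock -n 9; then
    echo "core 2 claimed"            # the first claimant wins and holds the lock while fd 9 stays open
else
    echo "core 2 already claimed"    # a later claimant fails immediately, like the ERROR below
fi
)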
00:05:50.095 [2024-10-17 19:13:13.675207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.095 [2024-10-17 19:13:13.757624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.095 [2024-10-17 19:13:13.760650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.095 [2024-10-17 19:13:13.760651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.663 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.663 [2024-10-17 19:13:14.443675] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1919392 has claimed it. 
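(The claim error above is the expected outcome: cores 0-2 were just locked through the first target, so enabling locks on the second target, whose 0x1c mask overlaps on core 2, must fail. The manual equivalent of what NOT rpc_cmd runs here, with socket path taken from the trace:

# From the SPDK repo root; /var/tmp/spdk2.sock belongs to the second target, pid 1919406.
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# Expected result: the -32603 'Failed to claim CPU core: 2' JSON-RPC error shown below.
)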
00:05:50.922 request: 00:05:50.923 { 00:05:50.923 "method": "framework_enable_cpumask_locks", 00:05:50.923 "req_id": 1 00:05:50.923 } 00:05:50.923 Got JSON-RPC error response 00:05:50.923 response: 00:05:50.923 { 00:05:50.923 "code": -32603, 00:05:50.923 "message": "Failed to claim CPU core: 2" 00:05:50.923 } 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1919392 /var/tmp/spdk.sock 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1919392 ']' 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1919406 /var/tmp/spdk2.sock 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1919406 ']' 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
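(The es=1 bookkeeping above is the NOT helper asserting that the RPC had to fail. A simplified reconstruction of its core; the real NOT() in autotest_common.sh also validates its argument and treats signal deaths, exit status above 128, specially:

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    (( es != 0 ))    # succeed only when the wrapped command failed
}
NOT false && echo "ok: the expected failure occurred"
)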
00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.923 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:51.182 00:05:51.182 real 0m1.714s 00:05:51.182 user 0m0.815s 00:05:51.182 sys 0m0.140s 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.182 19:13:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.182 ************************************ 00:05:51.182 END TEST locking_overlapped_coremask_via_rpc 00:05:51.182 ************************************ 00:05:51.182 19:13:14 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:51.182 19:13:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1919392 ]] 00:05:51.182 19:13:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1919392 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1919392 ']' 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1919392 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1919392 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1919392' 00:05:51.182 killing process with pid 1919392 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1919392 00:05:51.182 19:13:14 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1919392 00:05:51.752 19:13:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1919406 ]] 00:05:51.752 19:13:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1919406 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1919406 ']' 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1919406 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1919406 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1919406' 00:05:51.752 killing process with pid 1919406 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1919406 00:05:51.752 19:13:15 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1919406 00:05:52.012 19:13:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.012 19:13:15 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:52.012 19:13:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1919392 ]] 00:05:52.012 19:13:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1919392 00:05:52.012 19:13:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1919392 ']' 00:05:52.012 19:13:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1919392 00:05:52.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1919392) - No such process 00:05:52.012 19:13:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1919392 is not found' 00:05:52.012 Process with pid 1919392 is not found 00:05:52.012 19:13:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1919406 ]] 00:05:52.012 19:13:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1919406 00:05:52.012 19:13:15 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1919406 ']' 00:05:52.012 19:13:15 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1919406 00:05:52.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1919406) - No such process 00:05:52.012 19:13:15 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1919406 is not found' 00:05:52.013 Process with pid 1919406 is not found 00:05:52.013 19:13:15 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:52.013 00:05:52.013 real 0m13.495s 00:05:52.013 user 0m23.746s 00:05:52.013 sys 0m4.745s 00:05:52.013 19:13:15 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.013 19:13:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.013 ************************************ 00:05:52.013 END TEST cpu_locks 00:05:52.013 ************************************ 00:05:52.013 00:05:52.013 real 0m38.468s 00:05:52.013 user 1m13.953s 00:05:52.013 sys 0m8.284s 00:05:52.013 19:13:15 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.013 19:13:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.013 ************************************ 00:05:52.013 END TEST event 00:05:52.013 ************************************ 00:05:52.013 19:13:15 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:52.013 19:13:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.013 19:13:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.013 19:13:15 -- common/autotest_common.sh@10 -- # set +x 00:05:52.013 ************************************ 00:05:52.013 START TEST thread 00:05:52.013 ************************************ 00:05:52.013 19:13:15 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:52.272 * Looking for test storage... 00:05:52.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:52.272 19:13:15 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:52.272 19:13:15 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:52.272 19:13:15 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:52.272 19:13:15 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:52.272 19:13:15 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:52.272 19:13:15 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:52.272 19:13:15 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:52.272 19:13:15 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:52.272 19:13:15 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:52.272 19:13:15 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:52.272 19:13:15 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:52.272 19:13:15 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:52.272 19:13:15 thread -- scripts/common.sh@345 -- # : 1 00:05:52.272 19:13:15 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:52.272 19:13:15 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:52.272 19:13:15 thread -- scripts/common.sh@365 -- # decimal 1 00:05:52.272 19:13:15 thread -- scripts/common.sh@353 -- # local d=1 00:05:52.272 19:13:15 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:52.272 19:13:15 thread -- scripts/common.sh@355 -- # echo 1 00:05:52.272 19:13:15 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:52.272 19:13:15 thread -- scripts/common.sh@366 -- # decimal 2 00:05:52.272 19:13:15 thread -- scripts/common.sh@353 -- # local d=2 00:05:52.272 19:13:15 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:52.272 19:13:15 thread -- scripts/common.sh@355 -- # echo 2 00:05:52.272 19:13:15 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:52.272 19:13:15 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:52.272 19:13:15 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:52.272 19:13:15 thread -- scripts/common.sh@368 -- # return 0 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:52.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.272 --rc genhtml_branch_coverage=1 00:05:52.272 --rc genhtml_function_coverage=1 00:05:52.272 --rc genhtml_legend=1 00:05:52.272 --rc geninfo_all_blocks=1 00:05:52.272 --rc geninfo_unexecuted_blocks=1 00:05:52.272 00:05:52.272 ' 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:52.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.272 --rc genhtml_branch_coverage=1 00:05:52.272 --rc genhtml_function_coverage=1 00:05:52.272 --rc genhtml_legend=1 00:05:52.272 --rc geninfo_all_blocks=1 00:05:52.272 --rc geninfo_unexecuted_blocks=1 00:05:52.272 
00:05:52.272 ' 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:52.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.272 --rc genhtml_branch_coverage=1 00:05:52.272 --rc genhtml_function_coverage=1 00:05:52.272 --rc genhtml_legend=1 00:05:52.272 --rc geninfo_all_blocks=1 00:05:52.272 --rc geninfo_unexecuted_blocks=1 00:05:52.272 00:05:52.272 ' 00:05:52.272 19:13:15 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:52.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:52.273 --rc genhtml_branch_coverage=1 00:05:52.273 --rc genhtml_function_coverage=1 00:05:52.273 --rc genhtml_legend=1 00:05:52.273 --rc geninfo_all_blocks=1 00:05:52.273 --rc geninfo_unexecuted_blocks=1 00:05:52.273 00:05:52.273 ' 00:05:52.273 19:13:15 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.273 19:13:15 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:52.273 19:13:15 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.273 19:13:15 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.273 ************************************ 00:05:52.273 START TEST thread_poller_perf 00:05:52.273 ************************************ 00:05:52.273 19:13:15 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:52.273 [2024-10-17 19:13:15.961875] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:05:52.273 [2024-10-17 19:13:15.961944] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919965 ] 00:05:52.273 [2024-10-17 19:13:16.039092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.532 [2024-10-17 19:13:16.079038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.532 Running 1000 pollers for 1 seconds with 1 microseconds period. 
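(The results block that follows reports poller_cost as total busy cycles divided by the number of poller invocations, converted to nanoseconds via the TSC frequency. Redoing the arithmetic with the numbers printed below:

busy=2106832964      # total busy cycles from the block below
runs=417000          # total_run_count
tsc_hz=2100000000    # 2.1 GHz TSC
echo "poller_cost: $(( busy / runs )) (cyc), $(( (busy / runs) * 1000000000 / tsc_hz )) (nsec)"   # 5052 cyc, 2405 nsec
)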
00:05:53.470 [2024-10-17T17:13:17.254Z] ======================================
00:05:53.470 [2024-10-17T17:13:17.254Z] busy:2106832964 (cyc)
00:05:53.470 [2024-10-17T17:13:17.254Z] total_run_count: 417000
00:05:53.470 [2024-10-17T17:13:17.254Z] tsc_hz: 2100000000 (cyc)
00:05:53.470 [2024-10-17T17:13:17.254Z] ======================================
00:05:53.470 [2024-10-17T17:13:17.254Z] poller_cost: 5052 (cyc), 2405 (nsec)
00:05:53.470
00:05:53.470 real 0m1.187s
00:05:53.470 user 0m1.101s
00:05:53.470 sys 0m0.081s
00:05:53.470 19:13:17 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:53.470 19:13:17 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:53.470 ************************************
00:05:53.470 END TEST thread_poller_perf
00:05:53.470 ************************************
00:05:53.470 19:13:17 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:53.470 19:13:17 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:05:53.470 19:13:17 thread -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:53.470 19:13:17 thread -- common/autotest_common.sh@10 -- # set +x
00:05:53.470 ************************************
00:05:53.470 START TEST thread_poller_perf
00:05:53.470 ************************************
00:05:53.470 19:13:17 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:53.470 [2024-10-17 19:13:17.220345] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
00:05:53.470 [2024-10-17 19:13:17.220420] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920165 ]
00:05:53.728 [2024-10-17 19:13:17.300230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:53.728 [2024-10-17 19:13:17.341100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:53.728 Running 1000 pollers for 1 seconds with 0 microseconds period.
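(Same arithmetic for the 0-microsecond run reported next: with no timer period the pollers are plain busy-loop calls, so the per-invocation overhead drops from 5052 cycles to a few hundred:

echo $(( 2101578284 / 5616000 ))            # 374 cycles per call
echo $(( 374 * 1000000000 / 2100000000 ))   # 178 nsec at the same 2.1 GHz TSC
)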
00:05:54.662 [2024-10-17T17:13:18.446Z] ======================================
00:05:54.662 [2024-10-17T17:13:18.446Z] busy:2101578284 (cyc)
00:05:54.662 [2024-10-17T17:13:18.446Z] total_run_count: 5616000
00:05:54.662 [2024-10-17T17:13:18.446Z] tsc_hz: 2100000000 (cyc)
00:05:54.662 [2024-10-17T17:13:18.446Z] ======================================
00:05:54.662 [2024-10-17T17:13:18.446Z] poller_cost: 374 (cyc), 178 (nsec)
00:05:54.662
00:05:54.662 real 0m1.179s
00:05:54.662 user 0m1.097s
00:05:54.662 sys 0m0.078s
00:05:54.662 19:13:18 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:54.662 19:13:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:54.662 ************************************
00:05:54.662 END TEST thread_poller_perf
00:05:54.662 ************************************
00:05:54.662 19:13:18 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:54.662
00:05:54.662 real 0m2.689s
00:05:54.662 user 0m2.360s
00:05:54.662 sys 0m0.345s
00:05:54.662 19:13:18 thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:54.662 19:13:18 thread -- common/autotest_common.sh@10 -- # set +x
00:05:54.662 ************************************
00:05:54.662 END TEST thread
00:05:54.662 ************************************
00:05:54.921 19:13:18 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:54.921 19:13:18 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:54.921 19:13:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:54.921 19:13:18 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:54.921 19:13:18 -- common/autotest_common.sh@10 -- # set +x
00:05:54.921 ************************************
00:05:54.921 START TEST app_cmdline
00:05:54.921 ************************************
00:05:54.921 19:13:18 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:54.921 * Looking for test storage...
00:05:54.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:54.921 19:13:18 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:54.921 19:13:18 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:54.921 19:13:18 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:54.921 19:13:18 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:54.921 19:13:18 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.922 19:13:18 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:54.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.922 --rc genhtml_branch_coverage=1 00:05:54.922 --rc genhtml_function_coverage=1 00:05:54.922 --rc genhtml_legend=1 00:05:54.922 --rc geninfo_all_blocks=1 00:05:54.922 --rc geninfo_unexecuted_blocks=1 00:05:54.922 00:05:54.922 ' 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:54.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.922 --rc genhtml_branch_coverage=1 00:05:54.922 --rc genhtml_function_coverage=1 00:05:54.922 --rc genhtml_legend=1 00:05:54.922 --rc geninfo_all_blocks=1 00:05:54.922 --rc geninfo_unexecuted_blocks=1 
00:05:54.922 00:05:54.922 ' 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:54.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.922 --rc genhtml_branch_coverage=1 00:05:54.922 --rc genhtml_function_coverage=1 00:05:54.922 --rc genhtml_legend=1 00:05:54.922 --rc geninfo_all_blocks=1 00:05:54.922 --rc geninfo_unexecuted_blocks=1 00:05:54.922 00:05:54.922 ' 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:54.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.922 --rc genhtml_branch_coverage=1 00:05:54.922 --rc genhtml_function_coverage=1 00:05:54.922 --rc genhtml_legend=1 00:05:54.922 --rc geninfo_all_blocks=1 00:05:54.922 --rc geninfo_unexecuted_blocks=1 00:05:54.922 00:05:54.922 ' 00:05:54.922 19:13:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:54.922 19:13:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1920520 00:05:54.922 19:13:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1920520 00:05:54.922 19:13:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1920520 ']' 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:54.922 19:13:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.182 [2024-10-17 19:13:18.710876] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:05:55.182 [2024-10-17 19:13:18.710925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920520 ] 00:05:55.182 [2024-10-17 19:13:18.785873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.182 [2024-10-17 19:13:18.827613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.441 19:13:19 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:55.441 19:13:19 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:55.441 { 00:05:55.441 "version": "SPDK v25.01-pre git sha1 23f83d500", 00:05:55.441 "fields": { 00:05:55.441 "major": 25, 00:05:55.441 "minor": 1, 00:05:55.441 "patch": 0, 00:05:55.441 "suffix": "-pre", 00:05:55.441 "commit": "23f83d500" 00:05:55.441 } 00:05:55.441 } 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:55.441 19:13:19 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:55.441 19:13:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.441 19:13:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.700 19:13:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:55.700 19:13:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:55.700 19:13:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:55.700 request: 00:05:55.700 { 00:05:55.700 "method": "env_dpdk_get_mem_stats", 00:05:55.700 "req_id": 1 00:05:55.700 } 00:05:55.700 Got JSON-RPC error response 00:05:55.700 response: 00:05:55.700 { 00:05:55.700 "code": -32601, 00:05:55.700 "message": "Method not found" 00:05:55.700 } 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:55.700 19:13:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1920520 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1920520 ']' 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1920520 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.700 19:13:19 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1920520 00:05:55.959 19:13:19 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.959 19:13:19 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.959 19:13:19 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1920520' 00:05:55.959 killing process with pid 1920520 00:05:55.959 19:13:19 app_cmdline -- common/autotest_common.sh@969 -- # kill 1920520 00:05:55.959 19:13:19 app_cmdline -- common/autotest_common.sh@974 -- # wait 1920520 00:05:56.218 00:05:56.218 real 0m1.324s 00:05:56.218 user 0m1.542s 00:05:56.218 sys 0m0.446s 00:05:56.218 19:13:19 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.218 19:13:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.218 ************************************ 00:05:56.218 END TEST app_cmdline 00:05:56.218 ************************************ 00:05:56.218 19:13:19 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:56.218 19:13:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.218 19:13:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.218 19:13:19 -- common/autotest_common.sh@10 -- # set +x 00:05:56.218 ************************************ 00:05:56.218 START TEST version 00:05:56.218 ************************************ 00:05:56.218 19:13:19 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:56.218 * Looking for test storage... 
00:05:56.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:56.218 19:13:19 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.218 19:13:19 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.218 19:13:19 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.480 19:13:20 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.480 19:13:20 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.480 19:13:20 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.480 19:13:20 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.480 19:13:20 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.480 19:13:20 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.480 19:13:20 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.480 19:13:20 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.480 19:13:20 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.480 19:13:20 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.480 19:13:20 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.480 19:13:20 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.480 19:13:20 version -- scripts/common.sh@344 -- # case "$op" in 00:05:56.480 19:13:20 version -- scripts/common.sh@345 -- # : 1 00:05:56.480 19:13:20 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.480 19:13:20 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.480 19:13:20 version -- scripts/common.sh@365 -- # decimal 1 00:05:56.480 19:13:20 version -- scripts/common.sh@353 -- # local d=1 00:05:56.480 19:13:20 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.480 19:13:20 version -- scripts/common.sh@355 -- # echo 1 00:05:56.480 19:13:20 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.480 19:13:20 version -- scripts/common.sh@366 -- # decimal 2 00:05:56.480 19:13:20 version -- scripts/common.sh@353 -- # local d=2 00:05:56.480 19:13:20 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.480 19:13:20 version -- scripts/common.sh@355 -- # echo 2 00:05:56.480 19:13:20 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.480 19:13:20 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.480 19:13:20 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.480 19:13:20 version -- scripts/common.sh@368 -- # return 0 00:05:56.480 19:13:20 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.480 19:13:20 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.480 --rc genhtml_branch_coverage=1 00:05:56.480 --rc genhtml_function_coverage=1 00:05:56.480 --rc genhtml_legend=1 00:05:56.480 --rc geninfo_all_blocks=1 00:05:56.480 --rc geninfo_unexecuted_blocks=1 00:05:56.480 00:05:56.480 ' 00:05:56.480 19:13:20 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.480 --rc genhtml_branch_coverage=1 00:05:56.480 --rc genhtml_function_coverage=1 00:05:56.480 --rc genhtml_legend=1 00:05:56.480 --rc geninfo_all_blocks=1 00:05:56.480 --rc geninfo_unexecuted_blocks=1 00:05:56.480 00:05:56.480 ' 00:05:56.480 19:13:20 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:56.480 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.480 --rc genhtml_branch_coverage=1 00:05:56.480 --rc genhtml_function_coverage=1 00:05:56.480 --rc genhtml_legend=1 00:05:56.480 --rc geninfo_all_blocks=1 00:05:56.480 --rc geninfo_unexecuted_blocks=1 00:05:56.480 00:05:56.480 ' 00:05:56.480 19:13:20 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.480 --rc genhtml_branch_coverage=1 00:05:56.480 --rc genhtml_function_coverage=1 00:05:56.480 --rc genhtml_legend=1 00:05:56.480 --rc geninfo_all_blocks=1 00:05:56.480 --rc geninfo_unexecuted_blocks=1 00:05:56.480 00:05:56.480 ' 00:05:56.480 19:13:20 version -- app/version.sh@17 -- # get_header_version major 00:05:56.480 19:13:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # cut -f2 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.481 19:13:20 version -- app/version.sh@17 -- # major=25 00:05:56.481 19:13:20 version -- app/version.sh@18 -- # get_header_version minor 00:05:56.481 19:13:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # cut -f2 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.481 19:13:20 version -- app/version.sh@18 -- # minor=1 00:05:56.481 19:13:20 version -- app/version.sh@19 -- # get_header_version patch 00:05:56.481 19:13:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # cut -f2 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.481 19:13:20 version -- app/version.sh@19 -- # patch=0 00:05:56.481 19:13:20 version -- app/version.sh@20 -- # get_header_version suffix 00:05:56.481 19:13:20 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # cut -f2 00:05:56.481 19:13:20 version -- app/version.sh@14 -- # tr -d '"' 00:05:56.481 19:13:20 version -- app/version.sh@20 -- # suffix=-pre 00:05:56.481 19:13:20 version -- app/version.sh@22 -- # version=25.1 00:05:56.481 19:13:20 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:56.481 19:13:20 version -- app/version.sh@28 -- # version=25.1rc0 00:05:56.481 19:13:20 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:56.481 19:13:20 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:56.481 19:13:20 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:56.481 19:13:20 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:56.481 00:05:56.481 real 0m0.247s 00:05:56.481 user 0m0.152s 00:05:56.481 sys 0m0.139s 00:05:56.481 19:13:20 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.481 
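(The get_header_version calls traced above reduce to a grep/cut/tr pipeline over include/spdk/version.h. Condensed into one function; the ${1^^} uppercasing is shorthand introduced here, the traced script spells each field out:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path used by this job
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" "$spdk/include/spdk/version.h" \
        | cut -f2 | tr -d '"'   # take the tab-separated value, stripping quotes if present
}
get_header_version major   # prints 25 for this build; the test assembles 25.1rc0 from major/minor/suffix
)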
19:13:20 version -- common/autotest_common.sh@10 -- # set +x 00:05:56.481 ************************************ 00:05:56.481 END TEST version 00:05:56.481 ************************************ 00:05:56.481 19:13:20 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:56.481 19:13:20 -- spdk/autotest.sh@194 -- # uname -s 00:05:56.481 19:13:20 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:56.481 19:13:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:56.481 19:13:20 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:56.481 19:13:20 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:56.481 19:13:20 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.481 19:13:20 -- common/autotest_common.sh@10 -- # set +x 00:05:56.481 19:13:20 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:56.481 19:13:20 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:56.481 19:13:20 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:56.481 19:13:20 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:56.481 19:13:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.481 19:13:20 -- common/autotest_common.sh@10 -- # set +x 00:05:56.481 ************************************ 00:05:56.481 START TEST nvmf_tcp 00:05:56.481 ************************************ 00:05:56.481 19:13:20 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:56.741 * Looking for test storage... 
00:05:56.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.741 19:13:20 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.741 --rc genhtml_branch_coverage=1 00:05:56.741 --rc genhtml_function_coverage=1 00:05:56.741 --rc genhtml_legend=1 00:05:56.741 --rc geninfo_all_blocks=1 00:05:56.741 --rc geninfo_unexecuted_blocks=1 00:05:56.741 00:05:56.741 ' 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.741 --rc genhtml_branch_coverage=1 00:05:56.741 --rc genhtml_function_coverage=1 00:05:56.741 --rc genhtml_legend=1 00:05:56.741 --rc geninfo_all_blocks=1 00:05:56.741 --rc geninfo_unexecuted_blocks=1 00:05:56.741 00:05:56.741 ' 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:56.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.741 --rc genhtml_branch_coverage=1 00:05:56.741 --rc genhtml_function_coverage=1 00:05:56.741 --rc genhtml_legend=1 00:05:56.741 --rc geninfo_all_blocks=1 00:05:56.741 --rc geninfo_unexecuted_blocks=1 00:05:56.741 00:05:56.741 ' 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.741 --rc genhtml_branch_coverage=1 00:05:56.741 --rc genhtml_function_coverage=1 00:05:56.741 --rc genhtml_legend=1 00:05:56.741 --rc geninfo_all_blocks=1 00:05:56.741 --rc geninfo_unexecuted_blocks=1 00:05:56.741 00:05:56.741 ' 00:05:56.741 19:13:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:56.741 19:13:20 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:56.741 19:13:20 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.741 19:13:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:56.741 ************************************ 00:05:56.741 START TEST nvmf_target_core 00:05:56.741 ************************************ 00:05:56.741 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:57.001 * Looking for test storage... 00:05:57.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.001 --rc genhtml_branch_coverage=1 00:05:57.001 --rc genhtml_function_coverage=1 00:05:57.001 --rc genhtml_legend=1 00:05:57.001 --rc geninfo_all_blocks=1 00:05:57.001 --rc geninfo_unexecuted_blocks=1 00:05:57.001 00:05:57.001 ' 00:05:57.001 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.002 --rc genhtml_branch_coverage=1 00:05:57.002 --rc genhtml_function_coverage=1 00:05:57.002 --rc genhtml_legend=1 00:05:57.002 --rc geninfo_all_blocks=1 00:05:57.002 --rc geninfo_unexecuted_blocks=1 00:05:57.002 00:05:57.002 ' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.002 --rc genhtml_branch_coverage=1 00:05:57.002 --rc genhtml_function_coverage=1 00:05:57.002 --rc genhtml_legend=1 00:05:57.002 --rc geninfo_all_blocks=1 00:05:57.002 --rc geninfo_unexecuted_blocks=1 00:05:57.002 00:05:57.002 ' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.002 --rc genhtml_branch_coverage=1 00:05:57.002 --rc genhtml_function_coverage=1 00:05:57.002 --rc genhtml_legend=1 00:05:57.002 --rc geninfo_all_blocks=1 00:05:57.002 --rc geninfo_unexecuted_blocks=1 00:05:57.002 00:05:57.002 ' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.002 
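The "[: : integer expression expected" complaint traced above (nvmf/common.sh line 33) is bash rejecting an arithmetic test whose left operand expanded to nothing; the xtrace shows the literal expansion '[' '' -eq 1 ']'. A minimal sketch of that failure mode and the usual guard — FLAG is a stand-in name, since the trace does not show which variable was empty:

FLAG=""
[ "$FLAG" -eq 1 ]        # fails: [: : integer expression expected (test exits with status 2)
[ "${FLAG:-0}" -eq 1 ]   # guarded: an empty value expands to 0 and the test is well-formed

The failed test simply behaves as false, so build_nvmf_app_args falls through to the next branch, which is why the run proceeds past the message each time common.sh is sourced.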
************************************ 00:05:57.002 START TEST nvmf_abort 00:05:57.002 ************************************ 00:05:57.002 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:57.002 * Looking for test storage... 00:05:57.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.262 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.263 --rc genhtml_branch_coverage=1 00:05:57.263 --rc genhtml_function_coverage=1 00:05:57.263 --rc genhtml_legend=1 00:05:57.263 --rc geninfo_all_blocks=1 00:05:57.263 --rc geninfo_unexecuted_blocks=1 00:05:57.263 00:05:57.263 ' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.263 --rc genhtml_branch_coverage=1 00:05:57.263 --rc genhtml_function_coverage=1 00:05:57.263 --rc genhtml_legend=1 00:05:57.263 --rc geninfo_all_blocks=1 00:05:57.263 --rc geninfo_unexecuted_blocks=1 00:05:57.263 00:05:57.263 ' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.263 --rc genhtml_branch_coverage=1 00:05:57.263 --rc genhtml_function_coverage=1 00:05:57.263 --rc genhtml_legend=1 00:05:57.263 --rc geninfo_all_blocks=1 00:05:57.263 --rc geninfo_unexecuted_blocks=1 00:05:57.263 00:05:57.263 ' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.263 --rc genhtml_branch_coverage=1 00:05:57.263 --rc genhtml_function_coverage=1 00:05:57.263 --rc genhtml_legend=1 00:05:57.263 --rc geninfo_all_blocks=1 00:05:57.263 --rc geninfo_unexecuted_blocks=1 00:05:57.263 00:05:57.263 ' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
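nvmftestinit, entered here, turns the two E810 ports classified below into a self-contained initiator/target pair: the first port (cvl_0_0) moves into a fresh network namespace for the target, the second (cvl_0_1) stays in the root namespace for the initiator, and both directions are pinged before any NVMe traffic flows. Condensed from the trace that follows, with interface names and addresses exactly as recorded:

ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves in
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port, tagged so teardown can find its own rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'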
00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.263 19:13:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.831 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.832 19:13:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:03.832 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:03.832 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:03.832 19:13:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:03.832 Found net devices under 0000:86:00.0: cvl_0_0 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:03.832 Found net devices under 0000:86:00.1: cvl_0_1 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.832 19:13:26 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:03.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:03.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:06:03.832 00:06:03.832 --- 10.0.0.2 ping statistics --- 00:06:03.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.832 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:03.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:06:03.832 00:06:03.832 --- 10.0.0.1 ping statistics --- 00:06:03.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.832 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1924099 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1924099 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1924099 ']' 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.832 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.833 19:13:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.833 [2024-10-17 19:13:27.008423] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:06:03.833 [2024-10-17 19:13:27.008467] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.833 [2024-10-17 19:13:27.086644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.833 [2024-10-17 19:13:27.129724] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.833 [2024-10-17 19:13:27.129762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:03.833 [2024-10-17 19:13:27.129769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.833 [2024-10-17 19:13:27.129775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.833 [2024-10-17 19:13:27.129780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:03.833 [2024-10-17 19:13:27.131163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.833 [2024-10-17 19:13:27.131272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.833 [2024-10-17 19:13:27.131272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.092 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.092 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:04.092 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:04.092 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.092 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 [2024-10-17 19:13:27.889114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 Malloc0 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 Delay0 
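rpc_cmd in the trace is effectively the autotest wrapper around SPDK's scripts/rpc.py, talking to the nvmf_tgt started above over /var/tmp/spdk.sock inside the target namespace. The bring-up recorded here, and continued in the subsystem calls just below, is therefore equivalent to this sequence (every flag copied from the trace; the delay values put large artificial latencies on the Malloc0-backed Delay0 bdev — rpc.py takes them in microseconds, so about a second per operation class — which keeps I/O in flight for the abort workload to cancel):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB RAM disk, 4096-byte blocks
./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # read/write average and p99 delays
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420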
00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 [2024-10-17 19:13:27.967513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.351 19:13:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:04.351 [2024-10-17 19:13:28.104267] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:06.888 [2024-10-17 19:13:30.211424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a78e70 is same with the state(6) to be set 00:06:06.888 Initializing NVMe Controllers 00:06:06.888 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:06.888 controller IO queue size 128 less than required 00:06:06.888 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:06.888 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:06.888 Initialization complete. Launching workers. 
00:06:06.888 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37720 00:06:06.888 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37781, failed to submit 62 00:06:06.888 success 37724, unsuccessful 57, failed 0 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:06.888 rmmod nvme_tcp 00:06:06.888 rmmod nvme_fabrics 00:06:06.888 rmmod nvme_keyring 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1924099 ']' 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1924099 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1924099 ']' 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1924099 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1924099 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1924099' 00:06:06.888 killing process with pid 1924099 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1924099 00:06:06.888 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1924099 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:06.889 19:13:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.889 19:13:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:09.426 00:06:09.426 real 0m11.913s 00:06:09.426 user 0m13.827s 00:06:09.426 sys 0m5.486s 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.426 ************************************ 00:06:09.426 END TEST nvmf_abort 00:06:09.426 ************************************ 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.426 ************************************ 00:06:09.426 START TEST nvmf_ns_hotplug_stress 00:06:09.426 ************************************ 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:09.426 * Looking for test storage... 
00:06:09.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.426 --rc genhtml_branch_coverage=1 00:06:09.426 --rc genhtml_function_coverage=1 00:06:09.426 --rc genhtml_legend=1 00:06:09.426 --rc geninfo_all_blocks=1 00:06:09.426 --rc geninfo_unexecuted_blocks=1 00:06:09.426 00:06:09.426 ' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.426 --rc genhtml_branch_coverage=1 00:06:09.426 --rc genhtml_function_coverage=1 00:06:09.426 --rc genhtml_legend=1 00:06:09.426 --rc geninfo_all_blocks=1 00:06:09.426 --rc geninfo_unexecuted_blocks=1 00:06:09.426 00:06:09.426 ' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.426 --rc genhtml_branch_coverage=1 00:06:09.426 --rc genhtml_function_coverage=1 00:06:09.426 --rc genhtml_legend=1 00:06:09.426 --rc geninfo_all_blocks=1 00:06:09.426 --rc geninfo_unexecuted_blocks=1 00:06:09.426 00:06:09.426 ' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:09.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.426 --rc genhtml_branch_coverage=1 00:06:09.426 --rc genhtml_function_coverage=1 00:06:09.426 --rc genhtml_legend=1 00:06:09.426 --rc geninfo_all_blocks=1 00:06:09.426 --rc geninfo_unexecuted_blocks=1 00:06:09.426 00:06:09.426 ' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.426 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:09.427 19:13:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:14.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.861 
19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:14.861 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:14.861 Found net devices under 0000:86:00.0: cvl_0_0 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:14.861 Found net devices under 0000:86:00.1: cvl_0_1 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:06:14.861 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.862 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:15.121 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:15.121 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:15.121 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:15.121 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:15.121 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:15.121 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:15.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:15.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:06:15.122 00:06:15.122 --- 10.0.0.2 ping statistics --- 00:06:15.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.122 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:15.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:15.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:06:15.122 00:06:15.122 --- 10.0.0.1 ping statistics --- 00:06:15.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:15.122 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.122 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1928236 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1928236 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 
1928236 ']' 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.381 19:13:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:15.381 [2024-10-17 19:13:38.960622] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:06:15.381 [2024-10-17 19:13:38.960665] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.381 [2024-10-17 19:13:39.040344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.381 [2024-10-17 19:13:39.079891] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.381 [2024-10-17 19:13:39.079932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.381 [2024-10-17 19:13:39.079938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.381 [2024-10-17 19:13:39.079944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.381 [2024-10-17 19:13:39.079948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
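Everything traced above is the standard dataplane bring-up for these phy runs: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target, its sibling port cvl_0_1 stays in the root namespace as the initiator, TCP port 4420 is opened in the firewall, connectivity is verified in both directions, and nvmf_tgt is started inside the namespace. The sketch below condenses those steps into one place; it is an approximation of the nvmf/common.sh helpers seen in the trace (names simplified, error handling and the waitforlisten polling on /var/tmp/spdk.sock omitted), not the script itself.

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0                   # clear stale addresses on both ports
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                         # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> initiator
  # -i 0: shm id; -e 0xFFFF: tracepoint group mask; -m 0xE = 0b1110: cores 1-3
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The -m 0xE mask is why exactly three reactors report in on cores 1, 2 and 3 just below, and putting the target in its own namespace is what lets a single host exercise the NIC end to end: the two ports are presumably cabled back to back on this rig, so both ping checks cross real hardware rather than loopback.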
00:06:15.381 [2024-10-17 19:13:39.081356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.381 [2024-10-17 19:13:39.081463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.381 [2024-10-17 19:13:39.081464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:16.318 19:13:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:16.318 [2024-10-17 19:13:40.007492] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.318 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.576 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.834 [2024-10-17 19:13:40.404949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.834 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:16.834 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:17.093 Malloc0 00:06:17.093 19:13:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:17.350 Delay0 00:06:17.350 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.608 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:17.866 NULL1 00:06:17.866 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:17.866 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:17.866 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1928729 00:06:17.866 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:17.866 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.241 Read completed with error (sct=0, sc=11) 00:06:19.241 19:13:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.500 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:19.500 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:19.500 true 00:06:19.500 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:19.500 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.437 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.697 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:20.697 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:20.697 true 00:06:20.697 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:20.697 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.956 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.214 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:21.214 19:13:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:21.473 true 00:06:21.473 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:21.473 19:13:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.410 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.669 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:22.669 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:22.929 true 00:06:22.929 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:22.929 19:13:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.757 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.757 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:23.757 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:24.017 true 00:06:24.017 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:24.017 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.276 19:13:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.534 19:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:24.534 19:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:24.534 true 00:06:24.534 19:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:24.534 19:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.912 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.912 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:25.912 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:26.170 true 00:06:26.170 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:26.171 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.108 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.108 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:27.108 19:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:27.367 true 00:06:27.367 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:27.367 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.626 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.885 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:27.885 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:06:27.886 true 00:06:27.886 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:27.886 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.145 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.404 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:28.404 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:28.662 true 00:06:28.662 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:28.662 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.600 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.600 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:29.600 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:29.859 true 00:06:29.859 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:29.859 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.118 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.377 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:30.377 19:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:30.377 true 00:06:30.377 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:30.377 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.754 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.754 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.754 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:31.754 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:32.013 true 00:06:32.013 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:32.013 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.013 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.271 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:32.271 19:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:32.529 true 00:06:32.529 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:32.529 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.907 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.907 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:33.907 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:33.907 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:34.167 true 00:06:34.167 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:34.167 19:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.104 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.104 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.104 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:35.104 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:35.363 true 00:06:35.363 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:35.363 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.621 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.622 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:35.622 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:35.881 true 00:06:35.881 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:35.881 19:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.259 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.259 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:37.259 19:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:37.518 true 00:06:37.518 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:37.518 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.454 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.454 19:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.454 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:38.454 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:38.713 true 00:06:38.713 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:38.713 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.713 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.972 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:38.972 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:39.240 true 00:06:39.240 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:39.240 19:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.618 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.618 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:40.618 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:40.618 true 00:06:40.877 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:40.877 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.444 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:41.703 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:41.704 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:41.963 true 00:06:41.963 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:41.963 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.223 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.482 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:42.482 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:42.482 true 00:06:42.482 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:42.482 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.860 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.861 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:43.861 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:43.861 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:44.120 true 00:06:44.120 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:44.120 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.055 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.055 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:06:45.056 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:45.314 true 00:06:45.314 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:45.314 19:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.575 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.575 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:45.575 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:45.833 true 00:06:45.833 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:45.833 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.211 19:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.211 19:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:47.211 19:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:47.470 true 00:06:47.470 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729 00:06:47.470 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.406 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.406 Initializing NVMe Controllers 00:06:48.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.406 Controller IO queue size 128, less than required. 00:06:48.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:48.406 Controller IO queue size 128, less than required.
00:06:48.406 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:48.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:48.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:48.406 Initialization complete. Launching workers.
00:06:48.406 ========================================================
00:06:48.406                                                                           Latency(us)
00:06:48.406 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:06:48.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2155.25       1.05   41494.89    1535.17 1070813.39
00:06:48.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18097.33       8.84    7072.46    1701.58  368815.33
00:06:48.406 ========================================================
00:06:48.406 Total                                                                  :   20252.58       9.89   10735.65    1535.17 1070813.39
00:06:48.406
00:06:48.406 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:48.406 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:48.665 true
00:06:48.666 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1928729
00:06:48.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1928729) - No such process
00:06:48.666 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1928729
00:06:48.666 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.666 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:48.925 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:48.925 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:48.925 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:48.925 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:48.925 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:49.183 null0
00:06:49.183 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:49.183 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:49.183 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:49.442 null1
00:06:49.443 19:14:13
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.443 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:49.443 null2 00:06:49.443 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.443 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.443 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:49.702 null3 00:06:49.702 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.702 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.702 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:49.961 null4 00:06:49.961 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.961 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.961 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:50.220 null5 00:06:50.220 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.220 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.220 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:50.220 null6 00:06:50.220 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.220 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.220 19:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:50.480 null7 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
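The single-namespace phase traced above (ns_hotplug_stress.sh@44-50) keeps hot-plugging one namespace while a background I/O process (pid 1928729 in this run) stays alive, bumping null_size and growing the NULL1 bdev by one block on each pass; the phase ends once kill -0 reports "No such process" and the script falls through to the wait at sh@53 and the final namespace removals. A minimal bash sketch of that loop as it reads back from the xtrace; the rpc_py and perf_pid variable names are illustrative assumptions, not names taken from the script:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path seen throughout the trace
while kill -0 "$perf_pid"; do                                             # sh@44: loop while the workload is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: detach NSID 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach bdev Delay0
        null_size=$((null_size + 1))                                      # sh@49: 1019, 1020, ... 1028 in this run
        $rpc_py bdev_null_resize NULL1 $null_size                         # sh@50: grow the null bdev by one block
done
wait "$perf_pid"                                                          # sh@53: reap the exited workload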
00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.480 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
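From sh@58 the test fans out: eight null bdevs (100 MiB, 4096-byte blocks) are created, then one add_remove worker per bdev is pushed into the background and each pid is collected for the barrier at sh@66 (wait 1934303 ... 1934325 in this run). A sketch of that fan-out reconstructed from the two loops visible in the xtrace (sh@59-60 and sh@62-64), reusing the hypothetical $rpc_py shorthand from the sketch above:

nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
        $rpc_py bdev_null_create "null$i" 100 4096   # sh@60: creates null0 .. null7
done
for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) "null$i" &             # sh@63: NSID i+1 hot-plugged against null<i>
        pids+=($!)                                   # sh@64: remember the worker's pid
done
wait "${pids[@]}"                                    # sh@66: block until all eight workers finish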
00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
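Each backgrounded worker is the add_remove function whose sh@14-18 xtrace makes up most of the interleaved output below: it binds its namespace ID to one bdev and then adds and removes that single namespace ten times. A sketch reconstructed from those trace lines, under the same $rpc_py assumption:

add_remove() {
        local nsid=$1 bdev=$2                        # sh@14: e.g. nsid=1 bdev=null0
        for (( i = 0; i < 10; i++ )); do             # sh@16: ten add/remove rounds per worker
                $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
                $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
}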
00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1934303 1934306 1934309 1934313 1934315 1934318 1934322 1934325 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:50.481 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:50.740 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:50.740 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.740 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:50.740 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:50.741 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:50.741 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:50.741 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:50.741 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.000 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.260 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.260 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.260 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.260 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.261 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.520 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.780 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.781 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.781 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.781 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.781 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.040 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
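Note the two argument orders that repeat verbatim through this churn: nvmf_subsystem_add_ns takes the namespace ID with -n, then the subsystem NQN, then the bdev name, while nvmf_subsystem_remove_ns takes the NQN first and the bare namespace ID last. For example, exactly as run above:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3   # attach bdev null3 as NSID 4
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4         # detach NSID 4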
00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.330 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.331 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.331 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.331 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.331 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.331 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.331 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.331 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.628 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.887 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.146 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.405 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.405 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.405 19:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.405 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:53.663 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:53.921 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.179 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.179 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.179 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.179 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
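The interleaved @16-@18 xtrace above is the hot-plug stress loop in target/ns_hotplug_stress.sh: several background workers each attach and detach one namespace ten times while traffic runs, which is why the adds and removes for nsids 1-8 appear out of order. A minimal bash sketch of that loop, reconstructed from the trace (the helper name and exact structure are assumptions, not the verbatim script):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {    # hypothetical name for the traced loop body
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do                                                # @16
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }
    # one worker per namespace, backed by null bdevs null0..null7
    for n in {1..8}; do add_remove "$n" "null$((n - 1))" & done
    wait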
00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:54.180 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.437 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.437 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.437 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.437 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.437 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.438 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.438 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.697 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.697 rmmod nvme_tcp 00:06:54.697 rmmod nvme_fabrics 00:06:54.697 rmmod nvme_keyring 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1928236 ']' 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1928236 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1928236 ']' 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1928236 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1928236 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1928236' 00:06:54.956 killing process with pid 1928236 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1928236 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1928236 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.956 19:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.494 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:57.494 00:06:57.494 real 0m48.105s 00:06:57.494 user 3m14.886s 00:06:57.494 sys 0m15.435s 00:06:57.494 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.494 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 ************************************ 00:06:57.495 END TEST nvmf_ns_hotplug_stress 00:06:57.495 ************************************ 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:57.495 ************************************ 00:06:57.495 START TEST nvmf_delete_subsystem 00:06:57.495 ************************************ 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:57.495 * Looking for test storage... 
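Before the delete_subsystem test begins, the hot-plug run tears itself down in the trace above: the EXIT trap is cleared and nvmftestfini unloads the host-side NVMe modules, kills the target (pid 1928236 in this run), strips SPDK's iptables rules, removes the spdk network namespace, and flushes the test addresses. A condensed, hypothetical sketch of that sequence as traced (the real nvmftestfini in nvmf/common.sh has more branches for rdma/iso setups):

    sync
    modprobe -v -r nvme-tcp                                # also drops nvme-fabrics/nvme-keyring
    kill "$nvmfpid" && wait "$nvmfpid"                     # nvmfpid=1928236 here
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep only non-SPDK rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1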
00:06:57.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:57.495 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.495 --rc genhtml_branch_coverage=1 00:06:57.495 --rc genhtml_function_coverage=1 00:06:57.495 --rc genhtml_legend=1 00:06:57.495 --rc geninfo_all_blocks=1 00:06:57.495 --rc geninfo_unexecuted_blocks=1 00:06:57.495 00:06:57.495 ' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.495 --rc genhtml_branch_coverage=1 00:06:57.495 --rc genhtml_function_coverage=1 00:06:57.495 --rc genhtml_legend=1 00:06:57.495 --rc geninfo_all_blocks=1 00:06:57.495 --rc geninfo_unexecuted_blocks=1 00:06:57.495 00:06:57.495 ' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.495 --rc genhtml_branch_coverage=1 00:06:57.495 --rc genhtml_function_coverage=1 00:06:57.495 --rc genhtml_legend=1 00:06:57.495 --rc geninfo_all_blocks=1 00:06:57.495 --rc geninfo_unexecuted_blocks=1 00:06:57.495 00:06:57.495 ' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:57.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.495 --rc genhtml_branch_coverage=1 00:06:57.495 --rc genhtml_function_coverage=1 00:06:57.495 --rc genhtml_legend=1 00:06:57.495 --rc geninfo_all_blocks=1 00:06:57.495 --rc geninfo_unexecuted_blocks=1 00:06:57.495 00:06:57.495 ' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.495 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:57.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:57.496 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.069 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:04.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.070 
19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:04.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:04.070 Found net devices under 0000:86:00.0: cvl_0_0 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:04.070 Found net devices under 0000:86:00.1: cvl_0_1 
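The "Found net devices" lines above come from the NIC discovery in nvmf/common.sh: for each supported e810 PCI function it globs the sysfs net directory to learn the kernel interface name. A minimal sketch of that loop (@408-@427), with the logic simplified but the variable names taken from the trace:

    net_devs=()
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path, keep netdev name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done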
00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:04.070 19:14:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:04.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:07:04.070 00:07:04.070 --- 10.0.0.2 ping statistics --- 00:07:04.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.070 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:07:04.070 00:07:04.070 --- 10.0.0.1 ping statistics --- 00:07:04.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.070 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1938745 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1938745 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1938745 ']' 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.070 19:14:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.070 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.070 [2024-10-17 19:14:27.150881] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:07:04.070 [2024-10-17 19:14:27.150931] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.070 [2024-10-17 19:14:27.231479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.070 [2024-10-17 19:14:27.272580] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.070 [2024-10-17 19:14:27.272620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.070 [2024-10-17 19:14:27.272628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:04.071 [2024-10-17 19:14:27.272634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:04.071 [2024-10-17 19:14:27.272639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.071 [2024-10-17 19:14:27.273812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.071 [2024-10-17 19:14:27.273812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 [2024-10-17 19:14:27.409506] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:04.071 19:14:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 [2024-10-17 19:14:27.429701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 NULL1 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 Delay0 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1938776 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:04.071 19:14:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:04.071 [2024-10-17 19:14:27.541577] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
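With the target listening, the test stands up the subsystem over RPC and starts a background perf load before deleting the subsystem out from under it. The sequence below is condensed from the trace (delete_subsystem.sh @15-@28); $rpc_py stands for the full scripts/rpc.py path used in the log:

    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_null_create NULL1 1000 512            # 1000 MB null bdev, 512 B blocks
    $rpc_py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # delay bdev; latencies in us
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &      # perf_pid=1938776 in this run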
00:07:05.975 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:05.975 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.975 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:05.975 Read completed with error (sct=0, sc=8)
00:07:05.975 Read completed with error (sct=0, sc=8)
00:07:05.975 Write completed with error (sct=0, sc=8)
00:07:05.975 Write completed with error (sct=0, sc=8)
00:07:05.975 starting I/O failed: -6
[long runs of identical 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers elided here: the subsystem was torn down while spdk_nvme_perf still had a full queue against Delay0, so every outstanding command is failed back to the initiator]
00:07:05.976 [2024-10-17 19:14:29.750383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f07ec00cfe0 is same with the state(6) to be set
00:07:07.357 [2024-10-17 19:14:30.719783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6ba70 is same with the state(6) to be set
00:07:07.358 [2024-10-17 19:14:30.752067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a750 is same with the state(6) to be set
00:07:07.358 [2024-10-17 19:14:30.753061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a930 is same with the state(6) to be set
00:07:07.358 [2024-10-17 19:14:30.753148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f07ec00d640 is same with the state(6) to be set
00:07:07.358 [2024-10-17 19:14:30.753964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6a390 is same with the state(6) to be set
00:07:07.358 Initializing NVMe Controllers
00:07:07.358 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:07.358 Controller IO queue size 128, less than required.
00:07:07.358 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:07.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:07.358 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:07.358 Initialization complete. Launching workers.
00:07:07.358 ========================================================
00:07:07.358                                                                              Latency(us)
00:07:07.358 Device Information                                                         :       IOPS      MiB/s    Average        min        max
00:07:07.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     190.52       0.09  956769.26     373.76 1009944.45
00:07:07.358 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     150.73       0.07  895733.74     236.71 1010036.33
00:07:07.358 ========================================================
00:07:07.358 Total                                                                      :     341.25       0.17  929810.42     236.71 1010036.33
00:07:07.358
00:07:07.358 [2024-10-17 19:14:30.754646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb6ba70 (9): Bad file descriptor
00:07:07.358 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
19:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1938776 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1938776
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1938776) - No such process
19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1938776 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1938776
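The delay=0 / kill -0 / sleep 0.5 sequence above is a bounded poll: after the subsystem is deleted, spdk_nvme_perf is expected to fail out on its own, and the script checks every half second whether pid 1938776 is still alive, giving up after 30 tries. Lifted out of the harness, the pattern is simply (perf_pid standing in for the script's variable):
delay=0
while kill -0 "$perf_pid" 2> /dev/null; do   # signal 0 probes liveness, sends nothing
    if (( delay++ > 30 )); then              # ~15 s budget at 0.5 s per check
        echo "perf did not exit after subsystem deletion" >&2
        exit 1
    fi
    sleep 0.5
done
The 'No such process' from kill and the NOT wait that follows are the success path: the perf process died on its own, and the harness asserts that reaping it yields a non-zero exit code.
19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 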
common/autotest_common.sh@638 -- # local arg=wait 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1938776 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.619 [2024-10-17 19:14:31.285403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1939470 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:07.619 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.619 [2024-10-17 19:14:31.372718] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:08.187 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.187 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:08.187 19:14:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.755 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.755 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:08.755 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.324 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.324 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:09.324 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:09.583 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:09.583 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:09.583 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.151 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.151 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:10.151 19:14:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.719 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:10.719 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470 00:07:10.719 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:10.978 Initializing NVMe Controllers 00:07:10.978 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:10.978 Controller IO queue size 128, less than required. 00:07:10.978 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:10.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:10.978 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:10.978 Initialization complete. Launching workers. 
00:07:10.978 ========================================================
00:07:10.978                                                                              Latency(us)
00:07:10.978 Device Information                                                         :       IOPS      MiB/s    Average        min        max
00:07:10.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002408.38 1000136.48 1007490.12
00:07:10.978 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1004020.11 1000179.60 1041483.09
00:07:10.978 ========================================================
00:07:10.978 Total                                                                      :     256.00       0.12 1003214.25 1000136.48 1041483.09
00:07:10.978
00:07:11.237 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1939470
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1939470) - No such process
19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1939470 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1938745 ']' 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1938745 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1938745 ']' 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1938745 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1938745 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
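nvmftestfini above unloads the initiator-side kernel modules (the three rmmod lines) and then hands pid 1938745 to killprocess, which checks that the pid still names an SPDK reactor (or its sudo wrapper) before signalling, so a recycled pid cannot take down an unrelated process. A condensed sketch of that guard; the real helper in common/autotest_common.sh carries more branches:
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2> /dev/null || return 0              # nothing left to kill
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK app
    fi
    if [ "$process_name" = reactor_0 ] || [ "$process_name" = sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null || true                 # reap it; the exit code is irrelevant here
    fi
}
19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 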
reactor_0 = sudo ']' 00:07:11.237 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1938745' 00:07:11.237 killing process with pid 1938745 00:07:11.237 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1938745 00:07:11.237 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1938745 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.496 19:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.401 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:13.401 00:07:13.401 real 0m16.317s 00:07:13.401 user 0m29.527s 00:07:13.401 sys 0m5.539s 00:07:13.401 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.401 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:13.401 ************************************ 00:07:13.401 END TEST nvmf_delete_subsystem 00:07:13.401 ************************************ 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.668 ************************************ 00:07:13.668 START TEST nvmf_host_management 00:07:13.668 ************************************ 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:13.668 * Looking for test storage... 
00:07:13.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) ))
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:13.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.668 --rc genhtml_branch_coverage=1
00:07:13.668 --rc genhtml_function_coverage=1
00:07:13.668 --rc genhtml_legend=1
00:07:13.668 --rc geninfo_all_blocks=1
00:07:13.668 --rc geninfo_unexecuted_blocks=1
00:07:13.668
00:07:13.668 '
00:07:13.668 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:13.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.669 --rc genhtml_branch_coverage=1
00:07:13.669 --rc genhtml_function_coverage=1
00:07:13.669 --rc genhtml_legend=1
00:07:13.669 --rc geninfo_all_blocks=1
00:07:13.669 --rc geninfo_unexecuted_blocks=1
00:07:13.669
00:07:13.669 '
00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.669 --rc genhtml_branch_coverage=1
00:07:13.669 --rc genhtml_function_coverage=1
00:07:13.669 --rc genhtml_legend=1
00:07:13.669 --rc geninfo_all_blocks=1
00:07:13.669 --rc geninfo_unexecuted_blocks=1
00:07:13.669
00:07:13.669 '
00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:13.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:13.669 --rc genhtml_branch_coverage=1
00:07:13.669 --rc genhtml_function_coverage=1
00:07:13.669 --rc genhtml_legend=1
00:07:13.669 --rc geninfo_all_blocks=1
00:07:13.669 --rc geninfo_unexecuted_blocks=1
00:07:13.669
00:07:13.669 '
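The cmp_versions walk above is how the harness decides whether the installed lcov (1.15 here) predates 2: split both version strings on '.', '-' or ':', then compare them numerically field by field, treating missing fields as zero. Reduced to a standalone sketch:
ver_lt() {   # ver_lt A B: succeed when version A sorts before version B
    local IFS=.-:
    local -a a b
    local i n
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    (( n = ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
}
ver_lt 1.15 2 && echo "lcov predates 2.x"   # true for the 1.15 seen above
Because 1 < 2 is decided in the very first field, the trace returns 0 immediately, and the pre-2.x branch-coverage flags above are what get exported into LCOV_OPTS.
00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 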
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:13.669 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:13.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:13.928 19:14:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:20.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:20.502 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:20.502 Found net devices under 0000:86:00.0: cvl_0_0 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.502 19:14:43 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:20.502 Found net devices under 0000:86:00.1: cvl_0_1 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.502 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
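The nvmf_tcp_init block traced above builds the two-port topology: the target port is isolated in its own network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic really crosses the wire between the two physical E810 ports. Restated from the @250-@284 trace as plain commands:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up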
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:20.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:07:20.503 00:07:20.503 --- 10.0.0.2 ping statistics --- 00:07:20.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.503 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:07:20.503 00:07:20.503 --- 10.0.0.1 ping statistics --- 00:07:20.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.503 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1943693 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1943693 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:20.503 19:14:43 
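Before the target starts, the script opens the NVMe/TCP port through the firewall and proves two-way reachability; ipts is the suite's iptables wrapper, and the @788 line shows it expanding to a rule tagged SPDK_NVMF so teardown can strip it later. Restated from the @287-@291 trace:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # initiator -> target, 0.425 ms above
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator, 0.251 ms above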
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1943693 ']' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 [2024-10-17 19:14:43.578242] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:07:20.503 [2024-10-17 19:14:43.578287] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.503 [2024-10-17 19:14:43.656924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.503 [2024-10-17 19:14:43.699207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.503 [2024-10-17 19:14:43.699243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.503 [2024-10-17 19:14:43.699251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.503 [2024-10-17 19:14:43.699257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.503 [2024-10-17 19:14:43.699261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
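The target was launched with -m 0x1E while bdevperf later runs with -c 0x1, which is why four reactor threads appear on cores 1-4 below and core 0 is left for the initiator. The mask is plain binary; a quick check (hypothetical helper loop, not part of the test scripts):

  mask=0x1E                                  # binary 11110
  for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
  # prints cores 1 2 3 4, matching the four reactor_run notices below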
00:07:20.503 [2024-10-17 19:14:43.700868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.503 [2024-10-17 19:14:43.700978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.503 [2024-10-17 19:14:43.701105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.503 [2024-10-17 19:14:43.701105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 [2024-10-17 19:14:43.837341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 Malloc0 00:07:20.503 [2024-10-17 19:14:43.906230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
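The @22-@30 steps above show host_management.sh writing an RPC batch to rpcs.txt and piping it through rpc_cmd, which produces the Malloc0 bdev and the 10.0.0.2:4420 listener seen in the trace. The batch itself is never echoed into the log; given the Malloc0 bdev, the listener, and the host add/remove calls later in the run, it plausibly contains commands along these lines (illustrative sketch only, sizes assumed):

  bdev_malloc_create 64 512 -b Malloc0
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0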
target/host_management.sh@73 -- # perfpid=1943742 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1943742 /var/tmp/bdevperf.sock 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1943742 ']' 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:20.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:20.503 { 00:07:20.503 "params": { 00:07:20.503 "name": "Nvme$subsystem", 00:07:20.503 "trtype": "$TEST_TRANSPORT", 00:07:20.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:20.503 "adrfam": "ipv4", 00:07:20.503 "trsvcid": "$NVMF_PORT", 00:07:20.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:20.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:20.503 "hdgst": ${hdgst:-false}, 00:07:20.503 "ddgst": ${ddgst:-false} 00:07:20.503 }, 00:07:20.503 "method": "bdev_nvme_attach_controller" 00:07:20.503 } 00:07:20.503 EOF 00:07:20.503 )") 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:20.503 19:14:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:20.503 "params": { 00:07:20.503 "name": "Nvme0", 00:07:20.503 "trtype": "tcp", 00:07:20.503 "traddr": "10.0.0.2", 00:07:20.503 "adrfam": "ipv4", 00:07:20.503 "trsvcid": "4420", 00:07:20.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:20.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:20.504 "hdgst": false, 00:07:20.504 "ddgst": false 00:07:20.504 }, 00:07:20.504 "method": "bdev_nvme_attach_controller" 00:07:20.504 }' 00:07:20.504 [2024-10-17 19:14:44.001444] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
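The /dev/fd/63 argument in the @72 command line above is not a real file: it is the read end of a bash process substitution feeding the JSON printed by gen_nvmf_target_json straight into bdevperf, with the heredoc placeholders ($NVMF_FIRST_TARGET_IP, $NVMF_PORT) already expanded to 10.0.0.2 and 4420. The invocation was, in effect (path shortened):

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 10 \
    --json <(gen_nvmf_target_json 0)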
00:07:20.504 [2024-10-17 19:14:44.001488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943742 ] 00:07:20.504 [2024-10-17 19:14:44.076941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.504 [2024-10-17 19:14:44.117684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.763 Running I/O for 10 seconds... 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=91 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 91 -ge 100 ']' 00:07:20.764 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:21.023 
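The @52-@64 lines above (first pass, 91 reads, below the 100 threshold) and below (second pass, 707 reads) trace the waitforio helper: poll bdevperf's iostat up to ten times until at least 100 reads have completed, so the host-removal step is guaranteed to hit a connection with I/O in flight. Condensed from the trace:

  i=10; ret=1
  while (( i != 0 )); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                    jq -r '.bdevs[0].num_read_ops')
    [ "$read_io_count" -ge 100 ] && { ret=0; break; }
    sleep 0.25
    (( i-- ))
  done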
19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.023 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:21.023 [2024-10-17 19:14:44.805078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x74f2c0 is same with the state(6) to be set
00:07:21.023 [the same tcp.c:1773 message repeats for each remaining recv-state transition, timestamps 19:14:44.805133 through 19:14:44.805393]
00:07:21.284 [2024-10-17 19:14:44.809308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:21.284 [2024-10-17 19:14:44.809342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:21.284 [matching command / ABORTED - SQ DELETION completion pairs follow for every other queued I/O: READ cid:20-63 (lba:100864-106368) and WRITE cid:0-18 (lba:106496-108800), all len:128]
00:07:21.285 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.285 [2024-10-17 19:14:44.810373] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2580850 was disconnected and freed. reset controller.
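The abort storm above is the intended effect of the host-management step being tested, not a failure:

  # Pull the host's access out from under an active connection:
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # The target tears down the queue pair, all 64 queued commands (bdevperf
  # runs with -q 64) complete as ABORTED - SQ DELETION, and the initiator's
  # bdev layer reacts by resetting the controller, per the
  # bdev_nvme_disconnected_qpair_cb notice above.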
00:07:21.285 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:21.285 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.285 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.285 [2024-10-17 19:14:44.811286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:21.285 task offset: 100736 on job bdev=Nvme0n1 fails 00:07:21.285 00:07:21.285 Latency(us) 00:07:21.285 [2024-10-17T17:14:45.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.285 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:21.285 Job: Nvme0n1 ended in about 0.41 seconds with error 00:07:21.285 Verification LBA range: start 0x0 length 0x400 00:07:21.285 Nvme0n1 : 0.41 1938.24 121.14 157.62 0.00 29726.16 1365.33 27213.04 00:07:21.285 [2024-10-17T17:14:45.069Z] =================================================================================================================== 00:07:21.285 [2024-10-17T17:14:45.069Z] Total : 1938.24 121.14 157.62 0.00 29726.16 1365.33 27213.04 00:07:21.285 [2024-10-17 19:14:44.813679] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.285 [2024-10-17 19:14:44.813701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2367600 (9): Bad file descriptor 00:07:21.285 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.285 19:14:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:21.285 [2024-10-17 19:14:44.824316] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1943742 00:07:22.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1943742) - No such process 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:22.223 { 00:07:22.223 "params": { 00:07:22.223 "name": "Nvme$subsystem", 00:07:22.223 "trtype": "$TEST_TRANSPORT", 00:07:22.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.223 "adrfam": "ipv4", 00:07:22.223 "trsvcid": "$NVMF_PORT", 00:07:22.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.223 "hdgst": ${hdgst:-false}, 00:07:22.223 "ddgst": ${ddgst:-false} 00:07:22.223 }, 00:07:22.223 "method": "bdev_nvme_attach_controller" 00:07:22.223 } 00:07:22.223 EOF 00:07:22.223 )") 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:07:22.223 19:14:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:22.223 "params": { 00:07:22.223 "name": "Nvme0", 00:07:22.223 "trtype": "tcp", 00:07:22.223 "traddr": "10.0.0.2", 00:07:22.223 "adrfam": "ipv4", 00:07:22.223 "trsvcid": "4420", 00:07:22.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:22.223 "hdgst": false, 00:07:22.223 "ddgst": false 00:07:22.223 }, 00:07:22.223 "method": "bdev_nvme_attach_controller" 00:07:22.223 }' 00:07:22.223 [2024-10-17 19:14:45.878590] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:07:22.223 [2024-10-17 19:14:45.878644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1943997 ] 00:07:22.223 [2024-10-17 19:14:45.953956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.223 [2024-10-17 19:14:45.992424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.482 Running I/O for 1 seconds... 
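The "No such process" at line 91 above is expected: bdevperf already exited when spdk_app_stop fired, so the scripted SIGKILL finds nothing, and the trailing true in the trace is the guard branch taken. In script form, roughly:

  kill -9 "$perfpid" || true   # tolerate a perf process that is already gone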
00:07:23.860 2001.00 IOPS, 125.06 MiB/s 00:07:23.860 Latency(us) 00:07:23.860 [2024-10-17T17:14:47.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.860 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:23.860 Verification LBA range: start 0x0 length 0x400 00:07:23.860 Nvme0n1 : 1.01 2050.27 128.14 0.00 0.00 30623.52 1880.26 26838.55 00:07:23.860 [2024-10-17T17:14:47.644Z] =================================================================================================================== 00:07:23.860 [2024-10-17T17:14:47.644Z] Total : 2050.27 128.14 0.00 0.00 30623.52 1880.26 26838.55 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.860 rmmod nvme_tcp 00:07:23.860 rmmod nvme_fabrics 00:07:23.860 rmmod nvme_keyring 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1943693 ']' 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1943693 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1943693 ']' 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1943693 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1943693 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:23.860 19:14:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1943693' 00:07:23.860 killing process with pid 1943693 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1943693 00:07:23.860 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1943693 00:07:24.119 [2024-10-17 19:14:47.704420] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.119 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.120 19:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.025 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:26.025 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:26.025 00:07:26.025 real 0m12.544s 00:07:26.025 user 0m20.012s 00:07:26.025 sys 0m5.672s 00:07:26.025 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.025 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.025 ************************************ 00:07:26.025 END TEST nvmf_host_management 00:07:26.025 ************************************ 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 ************************************ 00:07:26.285 START TEST nvmf_lvol 00:07:26.285 ************************************ 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.285 * Looking for test storage... 00:07:26.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:26.285 19:14:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.285 --rc genhtml_branch_coverage=1 00:07:26.285 --rc genhtml_function_coverage=1 00:07:26.285 --rc genhtml_legend=1 00:07:26.285 --rc geninfo_all_blocks=1 00:07:26.285 --rc geninfo_unexecuted_blocks=1 00:07:26.285 00:07:26.285 ' 00:07:26.285 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.285 --rc genhtml_branch_coverage=1 00:07:26.285 --rc genhtml_function_coverage=1 00:07:26.285 --rc genhtml_legend=1 00:07:26.285 --rc geninfo_all_blocks=1 00:07:26.285 --rc geninfo_unexecuted_blocks=1 00:07:26.285 00:07:26.285 ' 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:26.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.286 --rc genhtml_branch_coverage=1 00:07:26.286 --rc genhtml_function_coverage=1 00:07:26.286 --rc genhtml_legend=1 00:07:26.286 --rc geninfo_all_blocks=1 00:07:26.286 --rc geninfo_unexecuted_blocks=1 00:07:26.286 00:07:26.286 ' 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:26.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.286 --rc genhtml_branch_coverage=1 00:07:26.286 --rc genhtml_function_coverage=1 00:07:26.286 --rc genhtml_legend=1 00:07:26.286 --rc geninfo_all_blocks=1 00:07:26.286 --rc geninfo_unexecuted_blocks=1 00:07:26.286 00:07:26.286 ' 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
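The lt/cmp_versions walk traced above decides whether the installed lcov predates 2.x: each version string is split on '.', '-' or ':' and compared component-wise as integers. Reconstructed as a standalone function from the xtrace (behavior inferred from the trace, not copied from scripts/common.sh, so treat it as a sketch):

# Version comparison as traced above: component-wise integer compare,
# missing components treated as 0.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' ]]; return; }
        (( a < b )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *=* ]]   # all components equal: true only for <=, >=, ==
}

lt 1.15 2 && echo "lcov is pre-2.x"   # same verdict the trace reaches above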
00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.286 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:26.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:26.546 19:14:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:33.119 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:33.119 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.119 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:33.120 19:14:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:33.120 Found net devices under 0000:86:00.0: cvl_0_0 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:33.120 Found net devices under 0000:86:00.1: cvl_0_1 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:33.120 19:14:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:33.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:07:33.120 00:07:33.120 --- 10.0.0.2 ping statistics --- 00:07:33.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.120 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:33.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:07:33.120 00:07:33.120 --- 10.0.0.1 ping statistics --- 00:07:33.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.120 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1947988 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1947988 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1947988 ']' 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.120 19:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.120 [2024-10-17 19:14:56.178719] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
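Before nvmf_tgt starts, the log above moves one port of the E810 pair into its own network namespace so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic actually traverses the wire, punches a firewall hole for port 4420, and sanity-pings both directions. Condensed from the logged commands (interface names and addresses are this run's; run as root):

# Namespace plumbing, condensed from the ip/iptables lines logged above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                         # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

The target process itself then runs under "ip netns exec cvl_0_0_ns_spdk", which is exactly how nvmfappstart launches nvmf_tgt in the log above.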
00:07:33.120 [2024-10-17 19:14:56.178762] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.121 [2024-10-17 19:14:56.255754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.121 [2024-10-17 19:14:56.297696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.121 [2024-10-17 19:14:56.297727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.121 [2024-10-17 19:14:56.297735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.121 [2024-10-17 19:14:56.297740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.121 [2024-10-17 19:14:56.297745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.121 [2024-10-17 19:14:56.299007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.121 [2024-10-17 19:14:56.299115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.121 [2024-10-17 19:14:56.299117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.379 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:33.648 [2024-10-17 19:14:57.224948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.648 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:33.913 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:33.913 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:34.172 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:34.172 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:34.172 19:14:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:34.430 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9d2d4d88-e090-4ac7-aca6-63c275431285 00:07:34.430 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d2d4d88-e090-4ac7-aca6-63c275431285 lvol 20 00:07:34.689 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=652921de-9ea0-437f-954f-8b25687e2880 00:07:34.689 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.948 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 652921de-9ea0-437f-954f-8b25687e2880 00:07:34.948 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:35.207 [2024-10-17 19:14:58.910963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.207 19:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.466 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1948488 00:07:35.466 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:35.466 19:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:36.402 19:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 652921de-9ea0-437f-954f-8b25687e2880 MY_SNAPSHOT 00:07:36.661 19:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3011b684-5bc5-4409-a49b-e4786e9d5b0a 00:07:36.661 19:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 652921de-9ea0-437f-954f-8b25687e2880 30 00:07:36.920 19:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3011b684-5bc5-4409-a49b-e4786e9d5b0a MY_CLONE 00:07:37.179 19:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=16194e5e-20eb-4ed2-a62e-de6615dc8962 00:07:37.179 19:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 16194e5e-20eb-4ed2-a62e-de6615dc8962 00:07:37.746 19:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1948488 00:07:45.866 Initializing NVMe Controllers 00:07:45.866 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:45.866 Controller IO queue size 128, less than required. 00:07:45.866 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:45.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:45.866 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:45.866 Initialization complete. Launching workers. 00:07:45.866 ======================================================== 00:07:45.866 Latency(us) 00:07:45.866 Device Information : IOPS MiB/s Average min max 00:07:45.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12385.31 48.38 10336.72 1575.64 57968.27 00:07:45.866 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12271.31 47.93 10434.51 3538.80 59010.94 00:07:45.866 ======================================================== 00:07:45.866 Total : 24656.62 96.31 10385.39 1575.64 59010.94 00:07:45.866 00:07:45.866 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:46.125 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 652921de-9ea0-437f-954f-8b25687e2880 00:07:46.384 19:15:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d2d4d88-e090-4ac7-aca6-63c275431285 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.384 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.643 rmmod nvme_tcp 00:07:46.643 rmmod nvme_fabrics 00:07:46.643 rmmod nvme_keyring 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1947988 ']' 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1947988 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1947988 ']' 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1947988 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1947988 00:07:46.643 19:15:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1947988' 00:07:46.643 killing process with pid 1947988 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1947988 00:07:46.643 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1947988 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.902 19:15:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.806 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:48.806 00:07:48.806 real 0m22.696s 00:07:48.806 user 1m5.245s 00:07:48.806 sys 0m7.789s 00:07:48.806 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:48.806 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.806 ************************************ 00:07:48.806 END TEST nvmf_lvol 00:07:48.806 ************************************ 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.068 ************************************ 00:07:49.068 START TEST nvmf_lvs_grow 00:07:49.068 ************************************ 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:49.068 * Looking for test storage... 
00:07:49.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.068 --rc genhtml_branch_coverage=1 00:07:49.068 --rc genhtml_function_coverage=1 00:07:49.068 --rc genhtml_legend=1 00:07:49.068 --rc geninfo_all_blocks=1 00:07:49.068 --rc geninfo_unexecuted_blocks=1 00:07:49.068 00:07:49.068 ' 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:49.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.068 --rc genhtml_branch_coverage=1 00:07:49.068 --rc genhtml_function_coverage=1 00:07:49.068 --rc genhtml_legend=1 00:07:49.068 --rc geninfo_all_blocks=1 00:07:49.068 --rc geninfo_unexecuted_blocks=1 00:07:49.068 00:07:49.068 ' 00:07:49.068 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.069 --rc genhtml_branch_coverage=1 00:07:49.069 --rc genhtml_function_coverage=1 00:07:49.069 --rc genhtml_legend=1 00:07:49.069 --rc geninfo_all_blocks=1 00:07:49.069 --rc geninfo_unexecuted_blocks=1 00:07:49.069 00:07:49.069 ' 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:49.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.069 --rc genhtml_branch_coverage=1 00:07:49.069 --rc genhtml_function_coverage=1 00:07:49.069 --rc genhtml_legend=1 00:07:49.069 --rc geninfo_all_blocks=1 00:07:49.069 --rc geninfo_unexecuted_blocks=1 00:07:49.069 00:07:49.069 ' 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:49.069 19:15:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.069 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.341 19:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:54.790 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:55.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:55.050 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.050 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.051 19:15:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:55.051 Found net devices under 0000:86:00.0: cvl_0_0 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:55.051 Found net devices under 0000:86:00.1: cvl_0_1 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:55.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:07:55.051 00:07:55.051 --- 10.0.0.2 ping statistics --- 00:07:55.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.051 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:07:55.051 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:07:55.310 00:07:55.310 --- 10.0.0.1 ping statistics --- 00:07:55.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.310 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1954389 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1954389 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1954389 ']' 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:55.311 19:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.311 [2024-10-17 19:15:18.938597] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
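For reference, the topology the nvmf_tcp_init trace above builds is easy to reproduce by hand: one port of the dual-port E810 NIC stays on the host as the initiator interface while the other is moved into a private network namespace for the target, so both ends of the NVMe/TCP connection run on a single machine over real hardware. A minimal sketch of the sequence, using the interface names (cvl_0_0/cvl_0_1), addresses, and port taken from this run — they will differ on other test beds:

  # Target side gets its own namespace and 10.0.0.2; the initiator keeps 10.0.0.1 on the host.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP listener port and confirm both directions are reachable.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is why the EAL initialization lines that follow carry --file-prefix=spdk0 and the test waits on /var/tmp/spdk.sock for the listener.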
00:07:55.311 [2024-10-17 19:15:18.938662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.311 [2024-10-17 19:15:19.016144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.311 [2024-10-17 19:15:19.055935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.311 [2024-10-17 19:15:19.055970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.311 [2024-10-17 19:15:19.055977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.311 [2024-10-17 19:15:19.055987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.311 [2024-10-17 19:15:19.055992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.311 [2024-10-17 19:15:19.056526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.570 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:55.830 [2024-10-17 19:15:19.358655] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.830 ************************************ 00:07:55.830 START TEST lvs_grow_clean 00:07:55.830 ************************************ 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:55.830 19:15:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.830 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.089 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:56.089 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:56.089 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e2c41704-0dc8-4564-8e8e-6a29bf326993 00:07:56.089 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:07:56.089 19:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:56.349 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:56.349 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:56.349 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e2c41704-0dc8-4564-8e8e-6a29bf326993 lvol 150 00:07:56.607 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c6508515-57b2-490f-af08-cdf3541dafb7 00:07:56.607 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.607 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:56.866 [2024-10-17 19:15:20.423524] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:56.866 [2024-10-17 19:15:20.423576] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:56.866 true 00:07:56.866 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
e2c41704-0dc8-4564-8e8e-6a29bf326993 00:07:56.866 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:56.866 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:56.866 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.125 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6508515-57b2-490f-af08-cdf3541dafb7 00:07:57.384 19:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.384 [2024-10-17 19:15:21.157758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1954888 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1954888 /var/tmp/bdevperf.sock 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1954888 ']' 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.643 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:57.643 [2024-10-17 19:15:21.385318] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
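The grow test itself is pure RPC plumbing: a 200 MiB file-backed AIO bdev yields an lvstore with 49 usable 4 MiB data clusters, the 150 MiB lvol consumes 38 of them, and after the backing file is extended and rescanned the store can be grown in place to 99 clusters — exactly the total_data_clusters values checked in this trace. A condensed sketch of the sequence (rpc.py and backing-file paths shortened from the full workspace paths used above):

  rpc=./scripts/rpc.py
  truncate -s 200M /tmp/aio_bdev                      # 200 MiB backing file
  $rpc bdev_aio_create /tmp/aio_bdev aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # lvstore with 49 data clusters
  $rpc bdev_lvol_create -u "$lvs" lvol 150            # 150 MiB lvol = 38 clusters
  truncate -s 400M /tmp/aio_bdev                      # grow the file underneath...
  $rpc bdev_aio_rescan aio_bdev                       # ...and let the aio bdev pick it up
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99

The arithmetic matches the cluster counts asserted in the trace: 200 MiB / 4 MiB = 50 clusters minus metadata gives 49; 150 MiB rounds up to 38 allocated clusters; 400 MiB gives 99 data clusters, leaving the 61 free clusters checked after the run.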
00:07:57.643 [2024-10-17 19:15:21.385362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1954888 ] 00:07:57.903 [2024-10-17 19:15:21.458964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.903 [2024-10-17 19:15:21.501769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.903 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.903 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:57.903 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:58.162 Nvme0n1 00:07:58.162 19:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:58.421 [ 00:07:58.421 { 00:07:58.421 "name": "Nvme0n1", 00:07:58.421 "aliases": [ 00:07:58.421 "c6508515-57b2-490f-af08-cdf3541dafb7" 00:07:58.421 ], 00:07:58.421 "product_name": "NVMe disk", 00:07:58.421 "block_size": 4096, 00:07:58.421 "num_blocks": 38912, 00:07:58.421 "uuid": "c6508515-57b2-490f-af08-cdf3541dafb7", 00:07:58.421 "numa_id": 1, 00:07:58.421 "assigned_rate_limits": { 00:07:58.421 "rw_ios_per_sec": 0, 00:07:58.421 "rw_mbytes_per_sec": 0, 00:07:58.421 "r_mbytes_per_sec": 0, 00:07:58.421 "w_mbytes_per_sec": 0 00:07:58.421 }, 00:07:58.421 "claimed": false, 00:07:58.421 "zoned": false, 00:07:58.421 "supported_io_types": { 00:07:58.421 "read": true, 00:07:58.421 "write": true, 00:07:58.421 "unmap": true, 00:07:58.421 "flush": true, 00:07:58.421 "reset": true, 00:07:58.421 "nvme_admin": true, 00:07:58.421 "nvme_io": true, 00:07:58.421 "nvme_io_md": false, 00:07:58.421 "write_zeroes": true, 00:07:58.421 "zcopy": false, 00:07:58.421 "get_zone_info": false, 00:07:58.421 "zone_management": false, 00:07:58.421 "zone_append": false, 00:07:58.421 "compare": true, 00:07:58.421 "compare_and_write": true, 00:07:58.421 "abort": true, 00:07:58.421 "seek_hole": false, 00:07:58.421 "seek_data": false, 00:07:58.421 "copy": true, 00:07:58.421 "nvme_iov_md": false 00:07:58.421 }, 00:07:58.421 "memory_domains": [ 00:07:58.421 { 00:07:58.421 "dma_device_id": "system", 00:07:58.421 "dma_device_type": 1 00:07:58.421 } 00:07:58.421 ], 00:07:58.421 "driver_specific": { 00:07:58.421 "nvme": [ 00:07:58.421 { 00:07:58.421 "trid": { 00:07:58.421 "trtype": "TCP", 00:07:58.421 "adrfam": "IPv4", 00:07:58.421 "traddr": "10.0.0.2", 00:07:58.421 "trsvcid": "4420", 00:07:58.421 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:58.421 }, 00:07:58.421 "ctrlr_data": { 00:07:58.421 "cntlid": 1, 00:07:58.421 "vendor_id": "0x8086", 00:07:58.421 "model_number": "SPDK bdev Controller", 00:07:58.421 "serial_number": "SPDK0", 00:07:58.421 "firmware_revision": "25.01", 00:07:58.421 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.421 "oacs": { 00:07:58.421 "security": 0, 00:07:58.421 "format": 0, 00:07:58.421 "firmware": 0, 00:07:58.421 "ns_manage": 0 00:07:58.421 }, 00:07:58.421 "multi_ctrlr": true, 00:07:58.421 
"ana_reporting": false 00:07:58.421 }, 00:07:58.421 "vs": { 00:07:58.421 "nvme_version": "1.3" 00:07:58.421 }, 00:07:58.421 "ns_data": { 00:07:58.421 "id": 1, 00:07:58.421 "can_share": true 00:07:58.421 } 00:07:58.421 } 00:07:58.421 ], 00:07:58.421 "mp_policy": "active_passive" 00:07:58.421 } 00:07:58.421 } 00:07:58.421 ] 00:07:58.421 19:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1954908 00:07:58.421 19:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.421 19:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:58.421 Running I/O for 10 seconds... 00:07:59.797 Latency(us) 00:07:59.797 [2024-10-17T17:15:23.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.798 Nvme0n1 : 1.00 23369.00 91.29 0.00 0.00 0.00 0.00 0.00 00:07:59.798 [2024-10-17T17:15:23.582Z] =================================================================================================================== 00:07:59.798 [2024-10-17T17:15:23.582Z] Total : 23369.00 91.29 0.00 0.00 0.00 0.00 0.00 00:07:59.798 00:08:00.365 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:00.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.624 Nvme0n1 : 2.00 23529.00 91.91 0.00 0.00 0.00 0.00 0.00 00:08:00.624 [2024-10-17T17:15:24.408Z] =================================================================================================================== 00:08:00.624 [2024-10-17T17:15:24.408Z] Total : 23529.00 91.91 0.00 0.00 0.00 0.00 0.00 00:08:00.624 00:08:00.624 true 00:08:00.624 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:00.624 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:00.883 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.883 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.883 19:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1954908 00:08:01.450 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.450 Nvme0n1 : 3.00 23556.00 92.02 0.00 0.00 0.00 0.00 0.00 00:08:01.450 [2024-10-17T17:15:25.234Z] =================================================================================================================== 00:08:01.450 [2024-10-17T17:15:25.234Z] Total : 23556.00 92.02 0.00 0.00 0.00 0.00 0.00 00:08:01.450 00:08:02.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.828 Nvme0n1 : 4.00 23653.50 92.40 0.00 0.00 0.00 0.00 0.00 00:08:02.828 [2024-10-17T17:15:26.612Z] 
=================================================================================================================== 00:08:02.828 [2024-10-17T17:15:26.612Z] Total : 23653.50 92.40 0.00 0.00 0.00 0.00 0.00 00:08:02.828 00:08:03.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.411 Nvme0n1 : 5.00 23679.40 92.50 0.00 0.00 0.00 0.00 0.00 00:08:03.411 [2024-10-17T17:15:27.195Z] =================================================================================================================== 00:08:03.411 [2024-10-17T17:15:27.195Z] Total : 23679.40 92.50 0.00 0.00 0.00 0.00 0.00 00:08:03.411 00:08:04.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.796 Nvme0n1 : 6.00 23708.33 92.61 0.00 0.00 0.00 0.00 0.00 00:08:04.796 [2024-10-17T17:15:28.580Z] =================================================================================================================== 00:08:04.796 [2024-10-17T17:15:28.580Z] Total : 23708.33 92.61 0.00 0.00 0.00 0.00 0.00 00:08:04.796 00:08:05.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.732 Nvme0n1 : 7.00 23736.29 92.72 0.00 0.00 0.00 0.00 0.00 00:08:05.732 [2024-10-17T17:15:29.516Z] =================================================================================================================== 00:08:05.732 [2024-10-17T17:15:29.516Z] Total : 23736.29 92.72 0.00 0.00 0.00 0.00 0.00 00:08:05.732 00:08:06.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.667 Nvme0n1 : 8.00 23746.75 92.76 0.00 0.00 0.00 0.00 0.00 00:08:06.667 [2024-10-17T17:15:30.452Z] =================================================================================================================== 00:08:06.668 [2024-10-17T17:15:30.452Z] Total : 23746.75 92.76 0.00 0.00 0.00 0.00 0.00 00:08:06.668 00:08:07.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.604 Nvme0n1 : 9.00 23721.44 92.66 0.00 0.00 0.00 0.00 0.00 00:08:07.604 [2024-10-17T17:15:31.388Z] =================================================================================================================== 00:08:07.604 [2024-10-17T17:15:31.388Z] Total : 23721.44 92.66 0.00 0.00 0.00 0.00 0.00 00:08:07.604 00:08:08.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.542 Nvme0n1 : 10.00 23751.20 92.78 0.00 0.00 0.00 0.00 0.00 00:08:08.542 [2024-10-17T17:15:32.326Z] =================================================================================================================== 00:08:08.542 [2024-10-17T17:15:32.326Z] Total : 23751.20 92.78 0.00 0.00 0.00 0.00 0.00 00:08:08.542 00:08:08.542 00:08:08.542 Latency(us) 00:08:08.542 [2024-10-17T17:15:32.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.542 Nvme0n1 : 10.01 23750.78 92.78 0.00 0.00 5386.29 1458.96 10797.84 00:08:08.542 [2024-10-17T17:15:32.326Z] =================================================================================================================== 00:08:08.542 [2024-10-17T17:15:32.326Z] Total : 23750.78 92.78 0.00 0.00 5386.29 1458.96 10797.84 00:08:08.542 { 00:08:08.542 "results": [ 00:08:08.542 { 00:08:08.542 "job": "Nvme0n1", 00:08:08.542 "core_mask": "0x2", 00:08:08.542 "workload": "randwrite", 00:08:08.542 "status": "finished", 00:08:08.542 "queue_depth": 128, 00:08:08.542 "io_size": 4096, 00:08:08.542 
"runtime": 10.005567, 00:08:08.542 "iops": 23750.777941919732, 00:08:08.542 "mibps": 92.77647633562395, 00:08:08.542 "io_failed": 0, 00:08:08.542 "io_timeout": 0, 00:08:08.542 "avg_latency_us": 5386.2910190203675, 00:08:08.542 "min_latency_us": 1458.9561904761904, 00:08:08.542 "max_latency_us": 10797.83619047619 00:08:08.542 } 00:08:08.542 ], 00:08:08.542 "core_count": 1 00:08:08.542 } 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1954888 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1954888 ']' 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1954888 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1954888 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1954888' 00:08:08.542 killing process with pid 1954888 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1954888 00:08:08.542 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.542 00:08:08.542 Latency(us) 00:08:08.542 [2024-10-17T17:15:32.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.542 [2024-10-17T17:15:32.326Z] =================================================================================================================== 00:08:08.542 [2024-10-17T17:15:32.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.542 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1954888 00:08:08.802 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:09.061 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:09.061 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:09.061 19:15:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:09.320 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:09.320 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:09.320 19:15:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:09.579 [2024-10-17 19:15:33.172249] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:09.579 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:09.839 request: 00:08:09.839 { 00:08:09.839 "uuid": "e2c41704-0dc8-4564-8e8e-6a29bf326993", 00:08:09.839 "method": "bdev_lvol_get_lvstores", 00:08:09.839 "req_id": 1 00:08:09.839 } 00:08:09.839 Got JSON-RPC error response 00:08:09.839 response: 00:08:09.839 { 00:08:09.839 "code": -19, 00:08:09.839 "message": "No such device" 00:08:09.839 } 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.839 aio_bdev 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c6508515-57b2-490f-af08-cdf3541dafb7 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=c6508515-57b2-490f-af08-cdf3541dafb7 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.839 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.098 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c6508515-57b2-490f-af08-cdf3541dafb7 -t 2000 00:08:10.357 [ 00:08:10.357 { 00:08:10.357 "name": "c6508515-57b2-490f-af08-cdf3541dafb7", 00:08:10.357 "aliases": [ 00:08:10.357 "lvs/lvol" 00:08:10.357 ], 00:08:10.357 "product_name": "Logical Volume", 00:08:10.357 "block_size": 4096, 00:08:10.357 "num_blocks": 38912, 00:08:10.357 "uuid": "c6508515-57b2-490f-af08-cdf3541dafb7", 00:08:10.357 "assigned_rate_limits": { 00:08:10.357 "rw_ios_per_sec": 0, 00:08:10.357 "rw_mbytes_per_sec": 0, 00:08:10.357 "r_mbytes_per_sec": 0, 00:08:10.357 "w_mbytes_per_sec": 0 00:08:10.357 }, 00:08:10.357 "claimed": false, 00:08:10.357 "zoned": false, 00:08:10.357 "supported_io_types": { 00:08:10.357 "read": true, 00:08:10.357 "write": true, 00:08:10.357 "unmap": true, 00:08:10.357 "flush": false, 00:08:10.357 "reset": true, 00:08:10.357 "nvme_admin": false, 00:08:10.357 "nvme_io": false, 00:08:10.357 "nvme_io_md": false, 00:08:10.357 "write_zeroes": true, 00:08:10.357 "zcopy": false, 00:08:10.357 "get_zone_info": false, 00:08:10.357 "zone_management": false, 00:08:10.357 "zone_append": false, 00:08:10.357 "compare": false, 00:08:10.357 "compare_and_write": false, 00:08:10.357 "abort": false, 00:08:10.357 "seek_hole": true, 00:08:10.357 "seek_data": true, 00:08:10.357 "copy": false, 00:08:10.357 "nvme_iov_md": false 00:08:10.357 }, 00:08:10.357 "driver_specific": { 00:08:10.357 "lvol": { 00:08:10.357 "lvol_store_uuid": "e2c41704-0dc8-4564-8e8e-6a29bf326993", 00:08:10.357 "base_bdev": "aio_bdev", 00:08:10.357 "thin_provision": false, 00:08:10.357 "num_allocated_clusters": 38, 00:08:10.357 "snapshot": false, 00:08:10.357 "clone": false, 00:08:10.357 "esnap_clone": false 00:08:10.357 } 00:08:10.357 } 00:08:10.357 } 00:08:10.357 ] 00:08:10.357 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:10.357 19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:10.357 
19:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:10.617 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:10.617 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:10.617 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:10.617 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:10.617 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6508515-57b2-490f-af08-cdf3541dafb7 00:08:10.875 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e2c41704-0dc8-4564-8e8e-6a29bf326993 00:08:11.134 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.394 00:08:11.394 real 0m15.523s 00:08:11.394 user 0m15.089s 00:08:11.394 sys 0m1.440s 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 ************************************ 00:08:11.394 END TEST lvs_grow_clean 00:08:11.394 ************************************ 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.394 19:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.394 ************************************ 00:08:11.394 START TEST lvs_grow_dirty 00:08:11.394 ************************************ 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.394 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.653 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.653 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:11.653 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:11.653 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:11.653 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.912 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.912 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.912 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 lvol 150 00:08:12.171 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:12.171 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.171 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.430 [2024-10-17 19:15:35.977494] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.430 [2024-10-17 19:15:35.977545] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.430 true 00:08:12.430 19:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:12.430 19:15:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:12.430 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:12.430 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.688 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:12.948 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.948 [2024-10-17 19:15:36.727740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1957490 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1957490 /var/tmp/bdevperf.sock 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1957490 ']' 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:13.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:13.226 19:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:13.226 [2024-10-17 19:15:36.956756] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:08:13.226 [2024-10-17 19:15:36.956805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1957490 ] 00:08:13.485 [2024-10-17 19:15:37.028070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.485 [2024-10-17 19:15:37.068017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.485 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.485 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:13.485 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:14.053 Nvme0n1 00:08:14.053 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:14.053 [ 00:08:14.053 { 00:08:14.053 "name": "Nvme0n1", 00:08:14.053 "aliases": [ 00:08:14.053 "b3095282-05d4-4424-91ed-f4a2ba9f998f" 00:08:14.053 ], 00:08:14.053 "product_name": "NVMe disk", 00:08:14.053 "block_size": 4096, 00:08:14.053 "num_blocks": 38912, 00:08:14.053 "uuid": "b3095282-05d4-4424-91ed-f4a2ba9f998f", 00:08:14.053 "numa_id": 1, 00:08:14.053 "assigned_rate_limits": { 00:08:14.053 "rw_ios_per_sec": 0, 00:08:14.053 "rw_mbytes_per_sec": 0, 00:08:14.053 "r_mbytes_per_sec": 0, 00:08:14.053 "w_mbytes_per_sec": 0 00:08:14.053 }, 00:08:14.053 "claimed": false, 00:08:14.053 "zoned": false, 00:08:14.053 "supported_io_types": { 00:08:14.053 "read": true, 00:08:14.053 "write": true, 00:08:14.053 "unmap": true, 00:08:14.053 "flush": true, 00:08:14.053 "reset": true, 00:08:14.053 "nvme_admin": true, 00:08:14.053 "nvme_io": true, 00:08:14.053 "nvme_io_md": false, 00:08:14.053 "write_zeroes": true, 00:08:14.053 "zcopy": false, 00:08:14.053 "get_zone_info": false, 00:08:14.053 "zone_management": false, 00:08:14.053 "zone_append": false, 00:08:14.053 "compare": true, 00:08:14.053 "compare_and_write": true, 00:08:14.053 "abort": true, 00:08:14.053 "seek_hole": false, 00:08:14.053 "seek_data": false, 00:08:14.053 "copy": true, 00:08:14.053 "nvme_iov_md": false 00:08:14.053 }, 00:08:14.053 "memory_domains": [ 00:08:14.053 { 00:08:14.053 "dma_device_id": "system", 00:08:14.053 "dma_device_type": 1 00:08:14.053 } 00:08:14.053 ], 00:08:14.053 "driver_specific": { 00:08:14.053 "nvme": [ 00:08:14.053 { 00:08:14.053 "trid": { 00:08:14.053 "trtype": "TCP", 00:08:14.053 "adrfam": "IPv4", 00:08:14.053 "traddr": "10.0.0.2", 00:08:14.053 "trsvcid": "4420", 00:08:14.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:14.053 }, 00:08:14.053 "ctrlr_data": { 00:08:14.053 "cntlid": 1, 00:08:14.053 "vendor_id": "0x8086", 00:08:14.053 "model_number": "SPDK bdev Controller", 00:08:14.053 "serial_number": "SPDK0", 00:08:14.053 "firmware_revision": "25.01", 00:08:14.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:14.053 "oacs": { 00:08:14.053 "security": 0, 00:08:14.053 "format": 0, 00:08:14.053 "firmware": 0, 00:08:14.053 "ns_manage": 0 00:08:14.053 }, 00:08:14.053 "multi_ctrlr": true, 00:08:14.053 
"ana_reporting": false 00:08:14.053 }, 00:08:14.053 "vs": { 00:08:14.053 "nvme_version": "1.3" 00:08:14.053 }, 00:08:14.054 "ns_data": { 00:08:14.054 "id": 1, 00:08:14.054 "can_share": true 00:08:14.054 } 00:08:14.054 } 00:08:14.054 ], 00:08:14.054 "mp_policy": "active_passive" 00:08:14.054 } 00:08:14.054 } 00:08:14.054 ] 00:08:14.054 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1957720 00:08:14.054 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:14.054 19:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:14.054 Running I/O for 10 seconds... 00:08:15.433 Latency(us) 00:08:15.433 [2024-10-17T17:15:39.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.433 Nvme0n1 : 1.00 23522.00 91.88 0.00 0.00 0.00 0.00 0.00 00:08:15.433 [2024-10-17T17:15:39.217Z] =================================================================================================================== 00:08:15.433 [2024-10-17T17:15:39.217Z] Total : 23522.00 91.88 0.00 0.00 0.00 0.00 0.00 00:08:15.433 00:08:16.002 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:16.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.260 Nvme0n1 : 2.00 23705.00 92.60 0.00 0.00 0.00 0.00 0.00 00:08:16.260 [2024-10-17T17:15:40.044Z] =================================================================================================================== 00:08:16.260 [2024-10-17T17:15:40.044Z] Total : 23705.00 92.60 0.00 0.00 0.00 0.00 0.00 00:08:16.260 00:08:16.260 true 00:08:16.260 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:16.260 19:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:16.519 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.519 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.519 19:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1957720 00:08:17.087 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.087 Nvme0n1 : 3.00 23562.33 92.04 0.00 0.00 0.00 0.00 0.00 00:08:17.087 [2024-10-17T17:15:40.871Z] =================================================================================================================== 00:08:17.087 [2024-10-17T17:15:40.871Z] Total : 23562.33 92.04 0.00 0.00 0.00 0.00 0.00 00:08:17.087 00:08:18.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.465 Nvme0n1 : 4.00 23663.50 92.44 0.00 0.00 0.00 0.00 0.00 00:08:18.465 [2024-10-17T17:15:42.249Z] 
=================================================================================================================== 00:08:18.465 [2024-10-17T17:15:42.249Z] Total : 23663.50 92.44 0.00 0.00 0.00 0.00 0.00 00:08:18.465 00:08:19.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.402 Nvme0n1 : 5.00 23749.00 92.77 0.00 0.00 0.00 0.00 0.00 00:08:19.402 [2024-10-17T17:15:43.186Z] =================================================================================================================== 00:08:19.402 [2024-10-17T17:15:43.186Z] Total : 23749.00 92.77 0.00 0.00 0.00 0.00 0.00 00:08:19.402 00:08:20.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.338 Nvme0n1 : 6.00 23790.83 92.93 0.00 0.00 0.00 0.00 0.00 00:08:20.338 [2024-10-17T17:15:44.123Z] =================================================================================================================== 00:08:20.339 [2024-10-17T17:15:44.123Z] Total : 23790.83 92.93 0.00 0.00 0.00 0.00 0.00 00:08:20.339 00:08:21.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.274 Nvme0n1 : 7.00 23828.29 93.08 0.00 0.00 0.00 0.00 0.00 00:08:21.274 [2024-10-17T17:15:45.058Z] =================================================================================================================== 00:08:21.274 [2024-10-17T17:15:45.058Z] Total : 23828.29 93.08 0.00 0.00 0.00 0.00 0.00 00:08:21.274 00:08:22.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.211 Nvme0n1 : 8.00 23859.88 93.20 0.00 0.00 0.00 0.00 0.00 00:08:22.211 [2024-10-17T17:15:45.995Z] =================================================================================================================== 00:08:22.211 [2024-10-17T17:15:45.995Z] Total : 23859.88 93.20 0.00 0.00 0.00 0.00 0.00 00:08:22.211 00:08:23.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.147 Nvme0n1 : 9.00 23887.89 93.31 0.00 0.00 0.00 0.00 0.00 00:08:23.147 [2024-10-17T17:15:46.931Z] =================================================================================================================== 00:08:23.147 [2024-10-17T17:15:46.931Z] Total : 23887.89 93.31 0.00 0.00 0.00 0.00 0.00 00:08:23.147 00:08:24.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.082 Nvme0n1 : 10.00 23902.60 93.37 0.00 0.00 0.00 0.00 0.00 00:08:24.082 [2024-10-17T17:15:47.866Z] =================================================================================================================== 00:08:24.082 [2024-10-17T17:15:47.866Z] Total : 23902.60 93.37 0.00 0.00 0.00 0.00 0.00 00:08:24.082 00:08:24.082 00:08:24.082 Latency(us) 00:08:24.082 [2024-10-17T17:15:47.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.082 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.082 Nvme0n1 : 10.00 23906.61 93.39 0.00 0.00 5351.14 2590.23 10485.76 00:08:24.082 [2024-10-17T17:15:47.866Z] =================================================================================================================== 00:08:24.082 [2024-10-17T17:15:47.866Z] Total : 23906.61 93.39 0.00 0.00 5351.14 2590.23 10485.76 00:08:24.082 { 00:08:24.082 "results": [ 00:08:24.082 { 00:08:24.082 "job": "Nvme0n1", 00:08:24.082 "core_mask": "0x2", 00:08:24.082 "workload": "randwrite", 00:08:24.082 "status": "finished", 00:08:24.082 "queue_depth": 128, 00:08:24.082 "io_size": 4096, 00:08:24.082 
"runtime": 10.003675, 00:08:24.082 "iops": 23906.61431923768, 00:08:24.082 "mibps": 93.38521218452219, 00:08:24.082 "io_failed": 0, 00:08:24.082 "io_timeout": 0, 00:08:24.082 "avg_latency_us": 5351.136199675284, 00:08:24.082 "min_latency_us": 2590.232380952381, 00:08:24.082 "max_latency_us": 10485.76 00:08:24.082 } 00:08:24.082 ], 00:08:24.082 "core_count": 1 00:08:24.082 } 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1957490 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1957490 ']' 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1957490 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1957490 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1957490' 00:08:24.341 killing process with pid 1957490 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1957490 00:08:24.341 Received shutdown signal, test time was about 10.000000 seconds 00:08:24.341 00:08:24.341 Latency(us) 00:08:24.341 [2024-10-17T17:15:48.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.341 [2024-10-17T17:15:48.125Z] =================================================================================================================== 00:08:24.341 [2024-10-17T17:15:48.125Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:24.341 19:15:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1957490 00:08:24.341 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.610 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:24.868 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:24.868 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:25.127 19:15:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1954389 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1954389 00:08:25.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1954389 Killed "${NVMF_APP[@]}" "$@" 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1959532 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1959532 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1959532 ']' 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.127 19:15:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.127 [2024-10-17 19:15:48.812926] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:08:25.127 [2024-10-17 19:15:48.812971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.127 [2024-10-17 19:15:48.892822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.387 [2024-10-17 19:15:48.933277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.387 [2024-10-17 19:15:48.933310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.387 [2024-10-17 19:15:48.933317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.387 [2024-10-17 19:15:48.933323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:25.387 [2024-10-17 19:15:48.933328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.387 [2024-10-17 19:15:48.933910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.387 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.646 [2024-10-17 19:15:49.230652] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:25.646 [2024-10-17 19:15:49.230731] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:25.646 [2024-10-17 19:15:49.230756] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:25.646 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:25.646 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:25.646 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:25.647 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:25.647 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:25.647 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:25.647 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:25.647 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.906 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3095282-05d4-4424-91ed-f4a2ba9f998f -t 2000 00:08:25.906 [ 00:08:25.906 { 00:08:25.906 "name": "b3095282-05d4-4424-91ed-f4a2ba9f998f", 00:08:25.906 "aliases": [ 00:08:25.906 "lvs/lvol" 00:08:25.906 ], 00:08:25.906 "product_name": "Logical Volume", 00:08:25.906 "block_size": 4096, 00:08:25.906 "num_blocks": 38912, 00:08:25.906 "uuid": "b3095282-05d4-4424-91ed-f4a2ba9f998f", 00:08:25.906 "assigned_rate_limits": { 00:08:25.906 "rw_ios_per_sec": 0, 00:08:25.906 "rw_mbytes_per_sec": 0, 
00:08:25.906 "r_mbytes_per_sec": 0, 00:08:25.906 "w_mbytes_per_sec": 0 00:08:25.906 }, 00:08:25.906 "claimed": false, 00:08:25.906 "zoned": false, 00:08:25.906 "supported_io_types": { 00:08:25.906 "read": true, 00:08:25.906 "write": true, 00:08:25.906 "unmap": true, 00:08:25.906 "flush": false, 00:08:25.906 "reset": true, 00:08:25.906 "nvme_admin": false, 00:08:25.906 "nvme_io": false, 00:08:25.906 "nvme_io_md": false, 00:08:25.906 "write_zeroes": true, 00:08:25.906 "zcopy": false, 00:08:25.906 "get_zone_info": false, 00:08:25.906 "zone_management": false, 00:08:25.906 "zone_append": false, 00:08:25.906 "compare": false, 00:08:25.906 "compare_and_write": false, 00:08:25.906 "abort": false, 00:08:25.906 "seek_hole": true, 00:08:25.906 "seek_data": true, 00:08:25.906 "copy": false, 00:08:25.906 "nvme_iov_md": false 00:08:25.906 }, 00:08:25.906 "driver_specific": { 00:08:25.906 "lvol": { 00:08:25.906 "lvol_store_uuid": "4bf531dc-fe8c-4f5b-a053-679da52ca1d2", 00:08:25.906 "base_bdev": "aio_bdev", 00:08:25.906 "thin_provision": false, 00:08:25.906 "num_allocated_clusters": 38, 00:08:25.906 "snapshot": false, 00:08:25.906 "clone": false, 00:08:25.906 "esnap_clone": false 00:08:25.906 } 00:08:25.906 } 00:08:25.906 } 00:08:25.906 ] 00:08:25.906 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:25.906 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:25.906 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:26.165 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:26.165 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:26.165 19:15:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:26.424 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:26.424 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.424 [2024-10-17 19:15:50.183701] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:26.683 request: 00:08:26.683 { 00:08:26.683 "uuid": "4bf531dc-fe8c-4f5b-a053-679da52ca1d2", 00:08:26.683 "method": "bdev_lvol_get_lvstores", 00:08:26.683 "req_id": 1 00:08:26.683 } 00:08:26.683 Got JSON-RPC error response 00:08:26.683 response: 00:08:26.683 { 00:08:26.683 "code": -19, 00:08:26.683 "message": "No such device" 00:08:26.683 } 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:26.683 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.943 aio_bdev 00:08:26.943 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:26.943 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:26.943 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:26.943 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:08:26.943 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:26.943 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:26.943 19:15:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.203 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b3095282-05d4-4424-91ed-f4a2ba9f998f -t 2000 00:08:27.203 [ 00:08:27.203 { 00:08:27.203 "name": "b3095282-05d4-4424-91ed-f4a2ba9f998f", 00:08:27.203 "aliases": [ 00:08:27.203 "lvs/lvol" 00:08:27.203 ], 00:08:27.203 "product_name": "Logical Volume", 00:08:27.203 "block_size": 4096, 00:08:27.203 "num_blocks": 38912, 00:08:27.203 "uuid": "b3095282-05d4-4424-91ed-f4a2ba9f998f", 00:08:27.203 "assigned_rate_limits": { 00:08:27.203 "rw_ios_per_sec": 0, 00:08:27.203 "rw_mbytes_per_sec": 0, 00:08:27.203 "r_mbytes_per_sec": 0, 00:08:27.203 "w_mbytes_per_sec": 0 00:08:27.203 }, 00:08:27.203 "claimed": false, 00:08:27.203 "zoned": false, 00:08:27.203 "supported_io_types": { 00:08:27.203 "read": true, 00:08:27.203 "write": true, 00:08:27.203 "unmap": true, 00:08:27.203 "flush": false, 00:08:27.203 "reset": true, 00:08:27.203 "nvme_admin": false, 00:08:27.203 "nvme_io": false, 00:08:27.203 "nvme_io_md": false, 00:08:27.203 "write_zeroes": true, 00:08:27.203 "zcopy": false, 00:08:27.203 "get_zone_info": false, 00:08:27.203 "zone_management": false, 00:08:27.203 "zone_append": false, 00:08:27.203 "compare": false, 00:08:27.203 "compare_and_write": false, 00:08:27.203 "abort": false, 00:08:27.203 "seek_hole": true, 00:08:27.203 "seek_data": true, 00:08:27.203 "copy": false, 00:08:27.203 "nvme_iov_md": false 00:08:27.203 }, 00:08:27.203 "driver_specific": { 00:08:27.203 "lvol": { 00:08:27.203 "lvol_store_uuid": "4bf531dc-fe8c-4f5b-a053-679da52ca1d2", 00:08:27.203 "base_bdev": "aio_bdev", 00:08:27.203 "thin_provision": false, 00:08:27.203 "num_allocated_clusters": 38, 00:08:27.203 "snapshot": false, 00:08:27.203 "clone": false, 00:08:27.203 "esnap_clone": false 00:08:27.203 } 00:08:27.203 } 00:08:27.203 } 00:08:27.203 ] 00:08:27.461 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:08:27.461 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:27.461 19:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:27.461 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:27.461 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:27.461 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:27.720 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:27.720 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b3095282-05d4-4424-91ed-f4a2ba9f998f 00:08:27.980 19:15:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bf531dc-fe8c-4f5b-a053-679da52ca1d2 00:08:28.240 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.240 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:28.240 00:08:28.240 real 0m16.951s 00:08:28.240 user 0m43.736s 00:08:28.240 sys 0m3.707s 00:08:28.240 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.240 19:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.240 ************************************ 00:08:28.240 END TEST lvs_grow_dirty 00:08:28.240 ************************************ 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:08:28.240 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:28.240 nvmf_trace.0 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:28.499 rmmod nvme_tcp 00:08:28.499 rmmod nvme_fabrics 00:08:28.499 rmmod nvme_keyring 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:28.499 
19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1959532 ']' 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1959532 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1959532 ']' 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1959532 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1959532 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1959532' 00:08:28.499 killing process with pid 1959532 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1959532 00:08:28.499 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1959532 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.759 19:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:30.665 00:08:30.665 real 0m41.751s 00:08:30.665 user 1m4.430s 00:08:30.665 sys 0m10.095s 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.665 ************************************ 00:08:30.665 END TEST nvmf_lvs_grow 00:08:30.665 ************************************ 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.665 19:15:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.925 ************************************ 00:08:30.925 START TEST nvmf_bdev_io_wait 00:08:30.925 ************************************ 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:30.925 * Looking for test storage... 00:08:30.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:30.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.925 --rc genhtml_branch_coverage=1 00:08:30.925 --rc genhtml_function_coverage=1 00:08:30.925 --rc genhtml_legend=1 00:08:30.925 --rc geninfo_all_blocks=1 00:08:30.925 --rc geninfo_unexecuted_blocks=1 00:08:30.925 00:08:30.925 ' 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:30.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.925 --rc genhtml_branch_coverage=1 00:08:30.925 --rc genhtml_function_coverage=1 00:08:30.925 --rc genhtml_legend=1 00:08:30.925 --rc geninfo_all_blocks=1 00:08:30.925 --rc geninfo_unexecuted_blocks=1 00:08:30.925 00:08:30.925 ' 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:30.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.925 --rc genhtml_branch_coverage=1 00:08:30.925 --rc genhtml_function_coverage=1 00:08:30.925 --rc genhtml_legend=1 00:08:30.925 --rc geninfo_all_blocks=1 00:08:30.925 --rc geninfo_unexecuted_blocks=1 00:08:30.925 00:08:30.925 ' 00:08:30.925 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:30.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.926 --rc genhtml_branch_coverage=1 00:08:30.926 --rc genhtml_function_coverage=1 00:08:30.926 --rc genhtml_legend=1 00:08:30.926 --rc geninfo_all_blocks=1 00:08:30.926 --rc geninfo_unexecuted_blocks=1 00:08:30.926 00:08:30.926 ' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.926 19:15:54 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.926 19:15:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:37.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:37.502 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:37.502 19:16:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:37.502 Found net devices under 0000:86:00.0: cvl_0_0 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:37.502 Found net devices under 0000:86:00.1: cvl_0_1 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:37.502 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:37.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:37.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:08:37.503 00:08:37.503 --- 10.0.0.2 ping statistics --- 00:08:37.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.503 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:37.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:37.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:08:37.503 00:08:37.503 --- 10.0.0.1 ping statistics --- 00:08:37.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:37.503 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1963641 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 1963641 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1963641 ']' 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 [2024-10-17 19:16:00.767235] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
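[Annotation] The trace above is nvmftestinit wiring the physical test topology: port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24 as the target side, its sibling port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420, and both directions are ping-verified before nvmf_tgt comes up inside the namespace (the startup banner directly above). A minimal standalone sketch of the same setup, assuming this run's interface names and omitting error handling:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1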
00:08:37.503 [2024-10-17 19:16:00.767277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.503 [2024-10-17 19:16:00.846378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:37.503 [2024-10-17 19:16:00.889481] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.503 [2024-10-17 19:16:00.889519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.503 [2024-10-17 19:16:00.889526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.503 [2024-10-17 19:16:00.889531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.503 [2024-10-17 19:16:00.889536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:37.503 [2024-10-17 19:16:00.891044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.503 [2024-10-17 19:16:00.891160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.503 [2024-10-17 19:16:00.891265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.503 [2024-10-17 19:16:00.891267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:37.503 [2024-10-17 19:16:01.023106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 Malloc0 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:37.503 [2024-10-17 19:16:01.078405] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1963667 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1963669 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:37.503 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:37.504 { 00:08:37.504 "params": { 
00:08:37.504 "name": "Nvme$subsystem", 00:08:37.504 "trtype": "$TEST_TRANSPORT", 00:08:37.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "$NVMF_PORT", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.504 "hdgst": ${hdgst:-false}, 00:08:37.504 "ddgst": ${ddgst:-false} 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 } 00:08:37.504 EOF 00:08:37.504 )") 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1963671 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:37.504 { 00:08:37.504 "params": { 00:08:37.504 "name": "Nvme$subsystem", 00:08:37.504 "trtype": "$TEST_TRANSPORT", 00:08:37.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "$NVMF_PORT", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.504 "hdgst": ${hdgst:-false}, 00:08:37.504 "ddgst": ${ddgst:-false} 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 } 00:08:37.504 EOF 00:08:37.504 )") 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1963675 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:37.504 { 00:08:37.504 "params": { 
00:08:37.504 "name": "Nvme$subsystem", 00:08:37.504 "trtype": "$TEST_TRANSPORT", 00:08:37.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "$NVMF_PORT", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.504 "hdgst": ${hdgst:-false}, 00:08:37.504 "ddgst": ${ddgst:-false} 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 } 00:08:37.504 EOF 00:08:37.504 )") 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:37.504 { 00:08:37.504 "params": { 00:08:37.504 "name": "Nvme$subsystem", 00:08:37.504 "trtype": "$TEST_TRANSPORT", 00:08:37.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "$NVMF_PORT", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.504 "hdgst": ${hdgst:-false}, 00:08:37.504 "ddgst": ${ddgst:-false} 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 } 00:08:37.504 EOF 00:08:37.504 )") 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1963667 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:37.504 "params": { 00:08:37.504 "name": "Nvme1", 00:08:37.504 "trtype": "tcp", 00:08:37.504 "traddr": "10.0.0.2", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "4420", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.504 "hdgst": false, 00:08:37.504 "ddgst": false 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 }' 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
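[Annotation] The block just printed shows how each bdevperf instance receives its bdev configuration: gen_nvmf_target_json expands a heredoc template once per subsystem into a bdev_nvme_attach_controller entry, validates it with jq, and the result reaches the instance over a process-substituted descriptor (--json /dev/fd/63); the three remaining instances below print the identical block. A reduced sketch of one such entry using the values substituted in this run (the function name here is illustrative, and the harness's real generator wraps such entries into the full bdevperf config document before handing it over):

gen_attach_entry() {
  # Emit one attach-controller entry, pretty-printed/validated through jq.
  cat <<EOF | jq .
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}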
00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:37.504 "params": { 00:08:37.504 "name": "Nvme1", 00:08:37.504 "trtype": "tcp", 00:08:37.504 "traddr": "10.0.0.2", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "4420", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.504 "hdgst": false, 00:08:37.504 "ddgst": false 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 }' 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:37.504 "params": { 00:08:37.504 "name": "Nvme1", 00:08:37.504 "trtype": "tcp", 00:08:37.504 "traddr": "10.0.0.2", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "4420", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.504 "hdgst": false, 00:08:37.504 "ddgst": false 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 }' 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:08:37.504 19:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:37.504 "params": { 00:08:37.504 "name": "Nvme1", 00:08:37.504 "trtype": "tcp", 00:08:37.504 "traddr": "10.0.0.2", 00:08:37.504 "adrfam": "ipv4", 00:08:37.504 "trsvcid": "4420", 00:08:37.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.504 "hdgst": false, 00:08:37.504 "ddgst": false 00:08:37.504 }, 00:08:37.504 "method": "bdev_nvme_attach_controller" 00:08:37.504 }' 00:08:37.504 [2024-10-17 19:16:01.130622] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:08:37.504 [2024-10-17 19:16:01.130669] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:37.504 [2024-10-17 19:16:01.133552] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:08:37.505 [2024-10-17 19:16:01.133607] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:37.505 [2024-10-17 19:16:01.135476] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:08:37.505 [2024-10-17 19:16:01.135519] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:37.505 [2024-10-17 19:16:01.136165] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
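[Annotation] Before these four bdevperf instances can attach, the target side has already been provisioned over its RPC socket, as traced earlier: bdev options are pinned down before framework_start_init (the app was started with --wait-for-rpc), then the TCP transport, a 64 MiB / 512 B malloc bdev, the subsystem, its namespace, and the 10.0.0.2:4420 listener are created. rpc_cmd forwards to scripts/rpc.py inside the target's namespace, so the explicit equivalent is roughly the following (path abbreviated; the tiny -p 5 -c 1 bdev_io pool is presumably what starves I/O and exercises the io_wait retry path this test targets):

RPC="ip netns exec cvl_0_0_ns_spdk scripts/rpc.py"
$RPC bdev_set_options -p 5 -c 1
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420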
00:08:37.505 [2024-10-17 19:16:01.136207] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:37.764 [2024-10-17 19:16:01.321282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.764 [2024-10-17 19:16:01.363597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:37.764 [2024-10-17 19:16:01.414126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.764 [2024-10-17 19:16:01.465123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.764 [2024-10-17 19:16:01.466869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:37.764 [2024-10-17 19:16:01.505291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:37.764 [2024-10-17 19:16:01.521820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.022 [2024-10-17 19:16:01.564004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:38.023 Running I/O for 1 seconds... 00:08:38.023 Running I/O for 1 seconds... 00:08:38.023 Running I/O for 1 seconds... 00:08:38.282 Running I/O for 1 seconds... 00:08:39.124 9212.00 IOPS, 35.98 MiB/s 00:08:39.124 Latency(us) 00:08:39.124 [2024-10-17T17:16:02.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.124 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:39.124 Nvme1n1 : 1.02 9184.05 35.88 0.00 0.00 13842.54 6366.35 23717.79 00:08:39.124 [2024-10-17T17:16:02.908Z] =================================================================================================================== 00:08:39.124 [2024-10-17T17:16:02.908Z] Total : 9184.05 35.88 0.00 0.00 13842.54 6366.35 23717.79 00:08:39.124 252184.00 IOPS, 985.09 MiB/s 00:08:39.124 Latency(us) 00:08:39.124 [2024-10-17T17:16:02.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.124 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:39.124 Nvme1n1 : 1.00 251798.69 983.59 0.00 0.00 505.34 224.30 1521.37 00:08:39.124 [2024-10-17T17:16:02.908Z] =================================================================================================================== 00:08:39.124 [2024-10-17T17:16:02.908Z] Total : 251798.69 983.59 0.00 0.00 505.34 224.30 1521.37 00:08:39.124 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1963669 00:08:39.124 8462.00 IOPS, 33.05 MiB/s 00:08:39.124 Latency(us) 00:08:39.124 [2024-10-17T17:16:02.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.124 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:39.124 Nvme1n1 : 1.01 8563.41 33.45 0.00 0.00 14912.01 3526.46 27462.70 00:08:39.124 [2024-10-17T17:16:02.908Z] =================================================================================================================== 00:08:39.124 [2024-10-17T17:16:02.908Z] Total : 8563.41 33.45 0.00 0.00 14912.01 3526.46 27462.70 00:08:39.124 10743.00 IOPS, 41.96 MiB/s 00:08:39.124 Latency(us) 00:08:39.124 [2024-10-17T17:16:02.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.124 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:39.124 Nvme1n1 : 1.01 10812.88 42.24 0.00 0.00 11802.22 4431.48 22843.98 00:08:39.124 
[2024-10-17T17:16:02.908Z] =================================================================================================================== 00:08:39.124 [2024-10-17T17:16:02.908Z] Total : 10812.88 42.24 0.00 0.00 11802.22 4431.48 22843.98 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1963671 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1963675 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:39.403 19:16:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:39.403 rmmod nvme_tcp 00:08:39.403 rmmod nvme_fabrics 00:08:39.403 rmmod nvme_keyring 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1963641 ']' 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1963641 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1963641 ']' 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1963641 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1963641 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1963641' 00:08:39.403 killing process with pid 1963641 
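[Annotation] Reading the four result tables: each job ran for one second at queue depth 128 with 4 KiB I/O, and the columns are IOPS, MiB/s, failed and timed-out I/O per second, and average/min/max latency in microseconds. Write, unmap and read land in the 8.5K-10.8K IOPS range, while flush reports ~252K IOPS, plausibly because flushing a RAM-backed malloc bdev is close to a no-op. A quick way to pull the per-job totals out of a saved log (the log filename is illustrative, not produced by the harness):

awk '$1 == "Total" {print $3, $4}' bdevperf.log   # IOPS and MiB/s columns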
00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1963641 00:08:39.403 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1963641 00:08:39.727 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.728 19:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.633 00:08:41.633 real 0m10.853s 00:08:41.633 user 0m16.516s 00:08:41.633 sys 0m6.102s 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:41.633 ************************************ 00:08:41.633 END TEST nvmf_bdev_io_wait 00:08:41.633 ************************************ 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.633 19:16:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:41.633 ************************************ 00:08:41.633 START TEST nvmf_queue_depth 00:08:41.633 ************************************ 00:08:41.634 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:41.893 * Looking for test storage... 
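[Annotation] The real/user/sys triplet and the END/START banners above come from the harness's run_test wrapper: each test script is timed, bracketed with banners, and its exit status decides whether the suite continues. A simplified sketch of such a wrapper (an approximation only; the actual autotest_common.sh version also manages xtrace state and failure bookkeeping):

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  time "$@"
  local rc=$?
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test nvmf_queue_depth test/nvmf/target/queue_depth.sh --transport=tcp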
00:08:41.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:41.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.893 --rc genhtml_branch_coverage=1 00:08:41.893 --rc genhtml_function_coverage=1 00:08:41.893 --rc genhtml_legend=1 00:08:41.893 --rc geninfo_all_blocks=1 00:08:41.893 --rc geninfo_unexecuted_blocks=1 00:08:41.893 00:08:41.893 ' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:41.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.893 --rc genhtml_branch_coverage=1 00:08:41.893 --rc genhtml_function_coverage=1 00:08:41.893 --rc genhtml_legend=1 00:08:41.893 --rc geninfo_all_blocks=1 00:08:41.893 --rc geninfo_unexecuted_blocks=1 00:08:41.893 00:08:41.893 ' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:41.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.893 --rc genhtml_branch_coverage=1 00:08:41.893 --rc genhtml_function_coverage=1 00:08:41.893 --rc genhtml_legend=1 00:08:41.893 --rc geninfo_all_blocks=1 00:08:41.893 --rc geninfo_unexecuted_blocks=1 00:08:41.893 00:08:41.893 ' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:41.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.893 --rc genhtml_branch_coverage=1 00:08:41.893 --rc genhtml_function_coverage=1 00:08:41.893 --rc genhtml_legend=1 00:08:41.893 --rc geninfo_all_blocks=1 00:08:41.893 --rc geninfo_unexecuted_blocks=1 00:08:41.893 00:08:41.893 ' 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.893 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.894 19:16:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:48.466 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:48.466 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.466 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:48.467 Found net devices under 0000:86:00.0: cvl_0_0 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:48.467 Found net devices under 0000:86:00.1: cvl_0_1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:48.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:08:48.467 00:08:48.467 --- 10.0.0.2 ping statistics --- 00:08:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.467 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
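The nvmftestinit trace above builds the test topology by moving one port of the dual-port E810 NIC into a private network namespace, so target and initiator traffic crosses the NIC instead of loopback. Condensed into plain commands (interface names as they appear in this run; run as root):

    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP back in
    ping -c 1 10.0.0.2                            # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and back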
00:08:48.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:08:48.467 00:08:48.467 --- 10.0.0.1 ping statistics --- 00:08:48.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.467 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1967681 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 1967681 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1967681 ']' 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.467 [2024-10-17 19:16:11.663657] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
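nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers. A simplified sketch of the launch and wait; the polling loop is an assumption for illustration, not the exact waitforlisten implementation:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the RPC UNIX socket until the target responds (sketch only):
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done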
00:08:48.467 [2024-10-17 19:16:11.663707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.467 [2024-10-17 19:16:11.746548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.467 [2024-10-17 19:16:11.787198] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.467 [2024-10-17 19:16:11.787232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.467 [2024-10-17 19:16:11.787239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.467 [2024-10-17 19:16:11.787245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.467 [2024-10-17 19:16:11.787250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.467 [2024-10-17 19:16:11.787810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.467 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.468 [2024-10-17 19:16:11.926457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.468 Malloc0 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.468 19:16:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.468 [2024-10-17 19:16:11.976619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1967709 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1967709 /var/tmp/bdevperf.sock 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1967709 ']' 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.468 19:16:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.468 [2024-10-17 19:16:12.026789] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
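With the target up, the queue_depth test provisions it over JSON-RPC and points bdevperf at it. The sequence below condenses the rpc_cmd calls traced above and immediately following:

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Start bdevperf idle (-z), then attach the remote namespace via its own RPC socket:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1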
00:08:48.468 [2024-10-17 19:16:12.026828] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967709 ] 00:08:48.468 [2024-10-17 19:16:12.100844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.468 [2024-10-17 19:16:12.141311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.468 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.468 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:48.468 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:48.468 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.468 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:48.727 NVMe0n1 00:08:48.727 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.727 19:16:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:48.727 Running I/O for 10 seconds... 00:08:51.041 11992.00 IOPS, 46.84 MiB/s [2024-10-17T17:16:15.760Z] 12204.50 IOPS, 47.67 MiB/s [2024-10-17T17:16:16.698Z] 12265.33 IOPS, 47.91 MiB/s [2024-10-17T17:16:17.636Z] 12282.00 IOPS, 47.98 MiB/s [2024-10-17T17:16:18.573Z] 12341.40 IOPS, 48.21 MiB/s [2024-10-17T17:16:19.510Z] 12435.50 IOPS, 48.58 MiB/s [2024-10-17T17:16:20.446Z] 12430.14 IOPS, 48.56 MiB/s [2024-10-17T17:16:21.823Z] 12499.62 IOPS, 48.83 MiB/s [2024-10-17T17:16:22.760Z] 12496.78 IOPS, 48.82 MiB/s [2024-10-17T17:16:22.760Z] 12517.90 IOPS, 48.90 MiB/s 00:08:58.976 Latency(us) 00:08:58.976 [2024-10-17T17:16:22.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.976 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:58.976 Verification LBA range: start 0x0 length 0x4000 00:08:58.976 NVMe0n1 : 10.05 12540.36 48.99 0.00 0.00 81364.58 14605.17 53926.77 00:08:58.976 [2024-10-17T17:16:22.760Z] =================================================================================================================== 00:08:58.976 [2024-10-17T17:16:22.760Z] Total : 12540.36 48.99 0.00 0.00 81364.58 14605.17 53926.77 00:08:58.976 { 00:08:58.976 "results": [ 00:08:58.976 { 00:08:58.976 "job": "NVMe0n1", 00:08:58.976 "core_mask": "0x1", 00:08:58.976 "workload": "verify", 00:08:58.976 "status": "finished", 00:08:58.976 "verify_range": { 00:08:58.976 "start": 0, 00:08:58.976 "length": 16384 00:08:58.976 }, 00:08:58.976 "queue_depth": 1024, 00:08:58.976 "io_size": 4096, 00:08:58.976 "runtime": 10.054175, 00:08:58.976 "iops": 12540.362585692013, 00:08:58.976 "mibps": 48.98579135035943, 00:08:58.976 "io_failed": 0, 00:08:58.976 "io_timeout": 0, 00:08:58.976 "avg_latency_us": 81364.58492281163, 00:08:58.976 "min_latency_us": 14605.165714285715, 00:08:58.976 "max_latency_us": 53926.76571428571 00:08:58.976 } 00:08:58.976 ], 00:08:58.976 "core_count": 1 00:08:58.976 } 00:08:58.976 19:16:22 
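The reported throughput follows directly from the JSON fields above, and the average latency is close to what Little's law predicts at queue depth 1024. A quick sanity check:

    iops=12540.362585692013; io_size=4096; qd=1024
    awk -v i="$iops" -v s="$io_size" -v q="$qd" 'BEGIN {
        printf "MiB/s        = %.2f\n", i * s / 1048576   # 48.99, matching the summary
        printf "qd/iops (us) = %.0f\n", q / i * 1e6       # ~81656, vs 81365 us measured avg
    }'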
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1967709 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1967709 ']' 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1967709 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1967709 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1967709' 00:08:58.976 killing process with pid 1967709 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1967709 00:08:58.976 Received shutdown signal, test time was about 10.000000 seconds 00:08:58.976 00:08:58.976 Latency(us) 00:08:58.976 [2024-10-17T17:16:22.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.976 [2024-10-17T17:16:22.760Z] =================================================================================================================== 00:08:58.976 [2024-10-17T17:16:22.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1967709 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.976 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.976 rmmod nvme_tcp 00:08:58.976 rmmod nvme_fabrics 00:08:59.235 rmmod nvme_keyring 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1967681 ']' 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1967681 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1967681 ']' 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@954 -- # kill -0 1967681 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1967681 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1967681' 00:08:59.235 killing process with pid 1967681 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1967681 00:08:59.235 19:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1967681 00:08:59.235 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:59.235 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:08:59.236 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:08:59.236 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:59.236 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:08:59.236 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:08:59.236 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:08:59.509 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.509 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.509 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.509 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.509 19:16:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:01.413 00:09:01.413 real 0m19.685s 00:09:01.413 user 0m22.938s 00:09:01.413 sys 0m6.078s 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:01.413 ************************************ 00:09:01.413 END TEST nvmf_queue_depth 00:09:01.413 ************************************ 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core -- 
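Teardown here is deliberately stateless about the firewall: every iptables rule the test added was tagged with an SPDK_NVMF comment (see the `-m comment --comment 'SPDK_NVMF:...'` invocation earlier), so the `iptr` step removes them all by filtering the saved ruleset instead of tracking individual rules:

    iptables-save | grep -v SPDK_NVMF | iptables-restore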
common/autotest_common.sh@10 -- # set +x 00:09:01.413 ************************************ 00:09:01.413 START TEST nvmf_target_multipath 00:09:01.413 ************************************ 00:09:01.413 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:01.673 * Looking for test storage... 00:09:01.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:01.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.673 --rc genhtml_branch_coverage=1 00:09:01.673 --rc genhtml_function_coverage=1 00:09:01.673 --rc genhtml_legend=1 00:09:01.673 --rc geninfo_all_blocks=1 00:09:01.673 --rc geninfo_unexecuted_blocks=1 00:09:01.673 00:09:01.673 ' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:01.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.673 --rc genhtml_branch_coverage=1 00:09:01.673 --rc genhtml_function_coverage=1 00:09:01.673 --rc genhtml_legend=1 00:09:01.673 --rc geninfo_all_blocks=1 00:09:01.673 --rc geninfo_unexecuted_blocks=1 00:09:01.673 00:09:01.673 ' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:01.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.673 --rc genhtml_branch_coverage=1 00:09:01.673 --rc genhtml_function_coverage=1 00:09:01.673 --rc genhtml_legend=1 00:09:01.673 --rc geninfo_all_blocks=1 00:09:01.673 --rc geninfo_unexecuted_blocks=1 00:09:01.673 00:09:01.673 ' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:01.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.673 --rc genhtml_branch_coverage=1 00:09:01.673 --rc genhtml_function_coverage=1 00:09:01.673 --rc genhtml_legend=1 00:09:01.673 --rc geninfo_all_blocks=1 00:09:01.673 --rc geninfo_unexecuted_blocks=1 00:09:01.673 00:09:01.673 ' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.673 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:01.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:01.674 19:16:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:08.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.247 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:08.248 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:08.248 Found net devices under 0000:86:00.0: cvl_0_0 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.248 19:16:31 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:08.248 Found net devices under 0000:86:00.1: cvl_0_1 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:09:08.248 00:09:08.248 --- 10.0.0.2 ping statistics --- 00:09:08.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.248 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:09:08.248 00:09:08.248 --- 10.0.0.1 ping statistics --- 00:09:08.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.248 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:08.248 only one NIC for nvmf test 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
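The nvmf_tcp_init sequence traced above splits the E810 port pair into a target side (cvl_0_0, moved into a fresh network namespace and addressed as 10.0.0.2) and an initiator side (cvl_0_1, left in the default namespace as 10.0.0.1), inserts a tagged iptables ACCEPT rule for the NVMe/TCP port, and ping-checks both directions before any test logic runs. A minimal standalone sketch of the same wiring, using the interface names, namespace, and addresses shown in the log:

```bash
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init pattern traced above (run as root).
# Interface names, namespace name, and addresses are the ones from the log.
set -euo pipefail

TARGET_IF=cvl_0_0        # becomes the target side, inside the namespace
INITIATOR_IF=cvl_0_1     # stays in the default namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Tagged ACCEPT rule for port 4420, so teardown can grep it back out later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"

# Both directions must answer before the test proceeds.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The multipath test then bails out ("only one NIC for nvmf test") because it needs a second target interface, which is why the trace goes straight into nvmftestfini.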
00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.248 rmmod nvme_tcp 00:09:08.248 rmmod nvme_fabrics 00:09:08.248 rmmod nvme_keyring 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.248 19:16:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.155 00:09:10.155 real 0m8.409s 00:09:10.155 user 0m1.818s 00:09:10.155 sys 0m4.588s 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:10.155 ************************************ 00:09:10.155 END TEST nvmf_target_multipath 00:09:10.155 ************************************ 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.155 ************************************ 00:09:10.155 START TEST nvmf_zcopy 00:09:10.155 ************************************ 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:10.155 * Looking for test storage... 
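Before the zcopy test above begins, the multipath teardown runs twice: once from the test body (multipath.sh@47 nvmftestfini) and once more from the EXIT trap (multipath.sh@1) after `exit 0`, which is why the same modprobe/iptables/netns sequence appears back to back in the trace. Condensed into a standalone sketch (names as in the log; the retry loop is simplified from the harness and tolerates modules that are busy or already unloaded):

```bash
# Sketch of the nvmftestfini teardown pattern, simplified from the trace above.
sync
set +e                                   # module removal may legitimately fail
for _ in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e

# Strip only the SPDK-tagged firewall rules inserted during setup.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk          # NIC falls back to the default namespace
ip -4 addr flush cvl_0_1
```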
00:09:10.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:10.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.155 --rc genhtml_branch_coverage=1 00:09:10.155 --rc genhtml_function_coverage=1 00:09:10.155 --rc genhtml_legend=1 00:09:10.155 --rc geninfo_all_blocks=1 00:09:10.155 --rc geninfo_unexecuted_blocks=1 00:09:10.155 00:09:10.155 ' 00:09:10.155 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:10.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.155 --rc genhtml_branch_coverage=1 00:09:10.155 --rc genhtml_function_coverage=1 00:09:10.155 --rc genhtml_legend=1 00:09:10.155 --rc geninfo_all_blocks=1 00:09:10.155 --rc geninfo_unexecuted_blocks=1 00:09:10.155 00:09:10.155 ' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:10.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.156 --rc genhtml_branch_coverage=1 00:09:10.156 --rc genhtml_function_coverage=1 00:09:10.156 --rc genhtml_legend=1 00:09:10.156 --rc geninfo_all_blocks=1 00:09:10.156 --rc geninfo_unexecuted_blocks=1 00:09:10.156 00:09:10.156 ' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:10.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.156 --rc genhtml_branch_coverage=1 00:09:10.156 --rc genhtml_function_coverage=1 00:09:10.156 --rc genhtml_legend=1 00:09:10.156 --rc geninfo_all_blocks=1 00:09:10.156 --rc geninfo_unexecuted_blocks=1 00:09:10.156 00:09:10.156 ' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.156 19:16:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.728 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:16.728 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:16.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:16.729 Found net devices under 0000:86:00.0: cvl_0_0 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:16.729 Found net devices under 0000:86:00.1: cvl_0_1 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:09:16.729 00:09:16.729 --- 10.0.0.2 ping statistics --- 00:09:16.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.729 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:16.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:09:16.729 00:09:16.729 --- 10.0.0.1 ping statistics --- 00:09:16.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.729 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1976604 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1976604 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1976604 ']' 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.729 19:16:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.729 [2024-10-17 19:16:39.879093] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:09:16.729 [2024-10-17 19:16:39.879144] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.729 [2024-10-17 19:16:39.958055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.729 [2024-10-17 19:16:39.996794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.729 [2024-10-17 19:16:39.996827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.729 [2024-10-17 19:16:39.996833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.729 [2024-10-17 19:16:39.996839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.729 [2024-10-17 19:16:39.996844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.729 [2024-10-17 19:16:39.997381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.988 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.988 [2024-10-17 19:16:40.770003] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.248 [2024-10-17 19:16:40.790200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.248 malloc0 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:17.248 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:17.248 { 00:09:17.248 "params": { 00:09:17.248 "name": "Nvme$subsystem", 00:09:17.248 "trtype": "$TEST_TRANSPORT", 00:09:17.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.248 "adrfam": "ipv4", 00:09:17.248 "trsvcid": "$NVMF_PORT", 00:09:17.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.248 "hdgst": ${hdgst:-false}, 00:09:17.248 "ddgst": ${ddgst:-false} 00:09:17.248 }, 00:09:17.248 "method": "bdev_nvme_attach_controller" 00:09:17.248 } 00:09:17.248 EOF 00:09:17.248 )") 00:09:17.249 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:17.249 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
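Everything the zcopy test needs on the target side is stood up through rpc_cmd, a thin wrapper around scripts/rpc.py: a zero-copy TCP transport, a subsystem, data and discovery listeners, and a malloc ramdisk exposed as namespace 1. Written out explicitly as a sketch (every argument below is taken from the trace; the harness also waits for the RPC socket via waitforlisten before issuing the first call):

```bash
# Target bring-up as traced above, inside the namespace created earlier.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# ... wait for /var/tmp/spdk.sock to appear (waitforlisten in the harness) ...

RPC=scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport, zero-copy enabled
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MiB ramdisk, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The JSON object printed just after this point is the matching initiator-side config that gen_nvmf_target_json (defined in nvmf/common.sh, as the @558..@584 trace lines show) assembles from a heredoc and hands to bdevperf.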
00:09:17.249 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:17.249 19:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:17.249 "params": { 00:09:17.249 "name": "Nvme1", 00:09:17.249 "trtype": "tcp", 00:09:17.249 "traddr": "10.0.0.2", 00:09:17.249 "adrfam": "ipv4", 00:09:17.249 "trsvcid": "4420", 00:09:17.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.249 "hdgst": false, 00:09:17.249 "ddgst": false 00:09:17.249 }, 00:09:17.249 "method": "bdev_nvme_attach_controller" 00:09:17.249 }' 00:09:17.249 [2024-10-17 19:16:40.873479] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:09:17.249 [2024-10-17 19:16:40.873523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976851 ] 00:09:17.249 [2024-10-17 19:16:40.946322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.249 [2024-10-17 19:16:40.987078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.507 Running I/O for 10 seconds... 00:09:19.821 8663.00 IOPS, 67.68 MiB/s [2024-10-17T17:16:44.540Z] 8722.00 IOPS, 68.14 MiB/s [2024-10-17T17:16:45.478Z] 8754.67 IOPS, 68.40 MiB/s [2024-10-17T17:16:46.413Z] 8767.25 IOPS, 68.49 MiB/s [2024-10-17T17:16:47.347Z] 8774.60 IOPS, 68.55 MiB/s [2024-10-17T17:16:48.283Z] 8782.00 IOPS, 68.61 MiB/s [2024-10-17T17:16:49.221Z] 8791.29 IOPS, 68.68 MiB/s [2024-10-17T17:16:50.600Z] 8794.25 IOPS, 68.71 MiB/s [2024-10-17T17:16:51.538Z] 8790.67 IOPS, 68.68 MiB/s [2024-10-17T17:16:51.538Z] 8785.90 IOPS, 68.64 MiB/s 00:09:27.754 Latency(us) 00:09:27.754 [2024-10-17T17:16:51.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.754 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:27.754 Verification LBA range: start 0x0 length 0x1000 00:09:27.754 Nvme1n1 : 10.01 8787.07 68.65 0.00 0.00 14526.09 1942.67 22219.82 00:09:27.754 [2024-10-17T17:16:51.538Z] =================================================================================================================== 00:09:27.754 [2024-10-17T17:16:51.538Z] Total : 8787.07 68.65 0.00 0.00 14526.09 1942.67 22219.82 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1978474 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:09:27.754 { 00:09:27.754 "params": { 00:09:27.754 "name": 
"Nvme$subsystem", 00:09:27.754 "trtype": "$TEST_TRANSPORT", 00:09:27.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:27.754 "adrfam": "ipv4", 00:09:27.754 "trsvcid": "$NVMF_PORT", 00:09:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:27.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:27.754 "hdgst": ${hdgst:-false}, 00:09:27.754 "ddgst": ${ddgst:-false} 00:09:27.754 }, 00:09:27.754 "method": "bdev_nvme_attach_controller" 00:09:27.754 } 00:09:27.754 EOF 00:09:27.754 )") 00:09:27.754 [2024-10-17 19:16:51.384519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.754 [2024-10-17 19:16:51.384552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:09:27.754 19:16:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:09:27.754 "params": { 00:09:27.754 "name": "Nvme1", 00:09:27.754 "trtype": "tcp", 00:09:27.754 "traddr": "10.0.0.2", 00:09:27.754 "adrfam": "ipv4", 00:09:27.754 "trsvcid": "4420", 00:09:27.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:27.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:27.754 "hdgst": false, 00:09:27.754 "ddgst": false 00:09:27.754 }, 00:09:27.754 "method": "bdev_nvme_attach_controller" 00:09:27.754 }' 00:09:27.754 [2024-10-17 19:16:51.396512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.754 [2024-10-17 19:16:51.396524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.754 [2024-10-17 19:16:51.408540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.754 [2024-10-17 19:16:51.408551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.754 [2024-10-17 19:16:51.420570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.754 [2024-10-17 19:16:51.420579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.754 [2024-10-17 19:16:51.426061] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:09:27.754 [2024-10-17 19:16:51.426112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1978474 ]
00:09:27.754 [2024-10-17 19:16:51.432609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:27.754 [2024-10-17 19:16:51.432620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:27.754 [... the same two-line error pair repeats for each retry, 19:16:51.444636 through 19:16:51.492782 ...]
00:09:27.754 [2024-10-17 19:16:51.503138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.754 [... error pair repeats, 19:16:51.504799 through 19:16:51.540911 ...]
00:09:28.014 [2024-10-17 19:16:51.543999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.014 [... error pair repeats, 19:16:51.552928 through 19:16:51.901908 ...]
00:09:28.274 Running I/O for 5 seconds...
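The repeating pair above is the NVMe-oF target rejecting duplicate namespace adds: spdk_nvmf_subsystem_add_ns_ext() refuses any NSID that is already attached to the subsystem, and the nvmf_subsystem_add_ns RPC then fails with "Unable to add namespace". A minimal sketch of how such a collision is produced with SPDK's stock rpc.py tooling; the subsystem NQN and bdev names here are illustrative, not taken from this run:

    # First add succeeds and claims NSID 1 (all names hypothetical)
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
    # Requesting the same NSID again is refused, emitting the two-line
    # ERROR pair seen throughout this log
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1

The steady repetition suggests a test loop deliberately retrying the duplicate add while bdevperf keeps I/O running; the throughput samples further down show the data path is unaffected.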
00:09:28.274 [... error pair repeats, 19:16:51.917420 through 19:16:52.893960 ...]
00:09:29.314 16885.00 IOPS, 131.91 MiB/s [2024-10-17T17:16:53.098Z]
00:09:29.314 [... error pair repeats, 19:16:52.907789 through 19:16:53.084332 ...]
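The periodic throughput records make it possible to infer the I/O size, which the log never states directly: 131.91 MiB/s at 16885.00 IOPS works out to exactly 8192 bytes per request, i.e. 8 KiB I/O (an inference from the arithmetic, not a logged parameter):

$16885\ \mathrm{IOPS} \times 8192\ \mathrm{B} = 138{,}321{,}920\ \mathrm{B/s} = 138{,}321{,}920 / 1024^{2}\ \mathrm{MiB/s} \approx 131.91\ \mathrm{MiB/s}$

The later samples fit the same figure: 16942.50 IOPS gives 132.36 MiB/s and 16967.67 IOPS gives 132.56 MiB/s at 8 KiB per I/O.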
00:09:29.314 [... error pair repeats, 19:16:53.097929 through 19:16:53.896890 ...]
00:09:30.355 16942.50 IOPS, 132.36 MiB/s [2024-10-17T17:16:54.139Z]
00:09:30.355 [... error pair repeats, 19:16:53.910532 through 19:16:54.075418 ...]
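For reference, a bdevperf invocation consistent with what this run shows (one core, matching "Total cores available: 1" and the "-c 0x1" EAL mask; a 5-second run per "Running I/O for 5 seconds..."; 8 KiB I/O as inferred above) would look roughly like the sketch below. The binary path, queue depth, workload type, and config file name are assumptions, not values read from this log:

    # Hypothetical invocation: -m 0x1 becomes the EAL '-c 0x1' core mask,
    # -t 5 produces the 5-second run, -o 8192 the inferred 8 KiB I/O size
    ./build/examples/bdevperf -m 0x1 -q 128 -o 8192 -w randwrite -t 5 \
        --json /tmp/bdevperf_config.json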
00:09:30.355 [... error pair repeats, 19:16:54.075437 through 19:16:54.906280 ...]
00:09:31.394 16967.67 IOPS, 132.56 MiB/s [2024-10-17T17:16:55.178Z]
00:09:31.394 [... error pair repeats, 19:16:54.919895 through 19:16:55.055726 ...]
19:16:55.069511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.069529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.082976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.082993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.097176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.097194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.110895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.110915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.125152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.125172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.135941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.135959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.150698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.150715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.161764] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.161782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.394 [2024-10-17 19:16:55.175784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.394 [2024-10-17 19:16:55.175801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.190047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.190067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.203284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.203303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.217047] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.217066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.230955] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.230973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.244835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.244852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.258641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.258664] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.272343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.272361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.286272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.286290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.300197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.300215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.314320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.314338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.328667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.328685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.342231] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.342248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.356106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.356124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.369843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.369861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.383866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.383883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.394372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.394389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.408317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.408335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.422223] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.422242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.654 [2024-10-17 19:16:55.435997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.654 [2024-10-17 19:16:55.436015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.449628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.449647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.463443] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.463461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.476956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.476974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.490516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.490534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.504592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.504617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.518870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.518893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.529738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.529756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.543767] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.543785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.557476] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.557493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.571138] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.571156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.584809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.584827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.598690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.598710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.612704] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.612722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.626694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.626713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.640314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.640333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.654522] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.654541] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.665234] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.665253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.679677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.679696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:31.914 [2024-10-17 19:16:55.693629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:31.914 [2024-10-17 19:16:55.693648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.707363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.707382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.720985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.721004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.734468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.734486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.747923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.747942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.761975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.761994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.775899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.775923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.789815] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.789833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.803703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.803721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.817156] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.817174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.831335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.831353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.844997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.845015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.858755] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.858774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.868792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.868810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.883273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.883292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.893871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.893889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.907903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.907922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 16981.50 IOPS, 132.67 MiB/s [2024-10-17T17:16:55.958Z] [2024-10-17 19:16:55.921420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.921439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.935720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.935739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.174 [2024-10-17 19:16:55.951692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.174 [2024-10-17 19:16:55.951711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:55.965671] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:55.965690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:55.979455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:55.979475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:55.993554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:55.993573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.007486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.007505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.021784] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.021804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.035182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.035201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.049289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:32.434 [2024-10-17 19:16:56.049309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.062795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.062813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.076485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.076503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.090337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.090355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.103880] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.103898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.117618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.117636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.131452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.131470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.145538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.145556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.159550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.159568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.173355] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.173373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.187148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.187166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.200846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.200864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.434 [2024-10-17 19:16:56.214335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.434 [2024-10-17 19:16:56.214354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.228420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.228439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.241765] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.241783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.255472] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.255490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.269211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.269229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.283104] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.283122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.297256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.297274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.311085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.311103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.325101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.693 [2024-10-17 19:16:56.325118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.693 [2024-10-17 19:16:56.339090] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.339108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.352901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.352919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.366705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.366723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.380489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.380508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.394145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.394162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.408117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.408135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.417819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.417837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.432205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.432224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.446208] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.446225] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.460239] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.460257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.694 [2024-10-17 19:16:56.473920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.694 [2024-10-17 19:16:56.473938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.487810] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.487830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.501711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.501729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.515166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.515184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.528893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.528910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.542666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.542690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.556855] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.556873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.570707] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.570725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.584426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.584445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.598360] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.598378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.611975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.611992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.625438] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.625456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.639613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.639631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.650240] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.650258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.664126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.664145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.677824] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.677842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.691585] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.691611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.705218] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.705236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.719314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.719332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.953 [2024-10-17 19:16:56.732527] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.953 [2024-10-17 19:16:56.732545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.212 [2024-10-17 19:16:56.746125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.212 [2024-10-17 19:16:56.746144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.212 [2024-10-17 19:16:56.760175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.760194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.213 [2024-10-17 19:16:56.774321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.774339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.213 [2024-10-17 19:16:56.788272] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.788290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.213 [2024-10-17 19:16:56.801942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.801965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.213 [2024-10-17 19:16:56.815686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.815704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.213 [2024-10-17 19:16:56.829966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.829985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.213 [2024-10-17 19:16:56.840801] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.213 [2024-10-17 19:16:56.840819] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(... error pair repeats every ~10-15 ms through 19:16:56.907 ...)
00:09:33.213 16976.00 IOPS, 132.62 MiB/s [2024-10-17T17:16:56.997Z]
00:09:33.213 [2024-10-17 19:16:56.920685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:33.213 [2024-10-17 19:16:56.920703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:33.213
00:09:33.213                                             Latency(us)
00:09:33.213 [2024-10-17T17:16:56.997Z] Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average       min       max
00:09:33.213 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:33.213 Nvme1n1            :       5.01   16978.17   132.64     0.00   0.00   7531.69   3448.44   14355.50
00:09:33.213 [2024-10-17T17:16:56.997Z] ===================================================================================================================
00:09:33.213 [2024-10-17T17:16:56.997Z] Total              :              16978.17   132.64     0.00   0.00   7531.69   3448.44   14355.50
00:09:33.213 [2024-10-17 19:16:56.930121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:33.213 [2024-10-17 19:16:56.930136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(... error pair repeats every ~12 ms through 19:16:57.074 ...)
00:09:33.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1978474) - No such process
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1978474
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:33.550 delay0
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:33.550 19:16:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy
-- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:33.550 [2024-10-17 19:16:57.264673] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:40.186 Initializing NVMe Controllers
00:09:40.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:40.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:40.186 Initialization complete. Launching workers.
00:09:40.186 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 108
00:09:40.186 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 395, failed to submit 33
00:09:40.186 success 188, unsuccessful 207, failed 0
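(The -r argument passed to build/examples/abort above is an SPDK transport ID string; its trtype/adrfam/traddr/trsvcid/ns fields name the same TCP listener the target exposed, which is why the tool reports attaching to 10.0.0.2:4420 / cnode1. As a rough initiator-side equivalent, here is a hedged nvme-cli sketch -- not part of this log, with the address and subsystem NQN taken from this run:

    # connect to the same target the abort example exercised
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    # detach again when done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
)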
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1976604 ']'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1976604
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1976604 ']'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1976604
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1976604
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1976604'
killing process with pid 1976604
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1976604
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1976604
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:40.186 19:17:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:42.094
00:09:42.094 real 0m32.036s
00:09:42.094 user 0m42.806s
00:09:42.094 sys 0m11.080s
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:42.094 ************************************
00:09:42.094 END TEST nvmf_zcopy
00:09:42.094 ************************************
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:42.094 ************************************
00:09:42.094 START TEST nvmf_nmic
00:09:42.094 ************************************
00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:42.094 * Looking for test storage...
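(Condensed, the nvmftestfini teardown traced just before this test boundary amounts to the following shell sequence. This is a sketch, not the harness source: the SPDK_NVMF rule tag and the cvl_0_1 interface name come from this run, and the retry loop is an approximation of nvmf/common.sh:

    sync
    set +e
    for i in {1..20}; do
        # unload the initiator modules; retry until the refcounts drop
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # drop only the firewall rules the test tagged with SPDK_NVMF
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # clear the test IPs from the NIC
    ip -4 addr flush cvl_0_1
)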
00:09:42.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:42.094 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:42.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.354 --rc genhtml_branch_coverage=1 00:09:42.354 --rc genhtml_function_coverage=1 00:09:42.354 --rc genhtml_legend=1 00:09:42.354 --rc geninfo_all_blocks=1 00:09:42.354 --rc geninfo_unexecuted_blocks=1 00:09:42.354 00:09:42.354 ' 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:42.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.354 --rc genhtml_branch_coverage=1 00:09:42.354 --rc genhtml_function_coverage=1 00:09:42.354 --rc genhtml_legend=1 00:09:42.354 --rc geninfo_all_blocks=1 00:09:42.354 --rc geninfo_unexecuted_blocks=1 00:09:42.354 00:09:42.354 ' 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:42.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.354 --rc genhtml_branch_coverage=1 00:09:42.354 --rc genhtml_function_coverage=1 00:09:42.354 --rc genhtml_legend=1 00:09:42.354 --rc geninfo_all_blocks=1 00:09:42.354 --rc geninfo_unexecuted_blocks=1 00:09:42.354 00:09:42.354 ' 00:09:42.354 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:42.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.354 --rc genhtml_branch_coverage=1 00:09:42.354 --rc genhtml_function_coverage=1 00:09:42.354 --rc genhtml_legend=1 00:09:42.354 --rc geninfo_all_blocks=1 00:09:42.354 --rc geninfo_unexecuted_blocks=1 00:09:42.354 00:09:42.354 ' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
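(The lt/cmp_versions trace above splits each version string on '.', '-', and ':' and compares the fields numerically, pairwise; that is how the harness decides whether the installed lcov predates 2.x before picking coverage options. A self-contained sketch of that logic -- ver_lt is an illustrative name, and purely numeric fields are assumed:

    ver_lt() {
        # return 0 (true) when $1 is a lower version than $2
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'
)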
00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:42.355 
19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:42.355 19:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.930 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:48.931 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:48.931 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.931 19:17:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:48.931 Found net devices under 0000:86:00.0: cvl_0_0 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:48.931 Found net devices under 0000:86:00.1: cvl_0_1 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
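gather_supported_nvmf_pci_devs above matches against a whitelist of vendor:device IDs; both ports of the Intel E810 (0x8086:0x159b, ice driver) hit the e810 list, and each exposes exactly one net device via sysfs. The matching step condensed to a sketch (it assumes a pci_bus_cache map already populated from the PCI bus scan):

    # E810 device IDs whitelisted in the trace (vendor 0x8086)
    e810=(${pci_bus_cache["0x8086:0x1592"]} ${pci_bus_cache["0x8086:0x159b"]})
    pci_devs=("${e810[@]}")
    for pci in "${pci_devs[@]}"; do
        # every matched function exposes its netdev name under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        net_devs+=("${pci_net_devs[@]##*/}")   # here: cvl_0_0, cvl_0_1
    done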
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:48.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:09:48.931 00:09:48.931 --- 10.0.0.2 ping statistics --- 00:09:48.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.931 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:48.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:09:48.931 00:09:48.931 --- 10.0.0.1 ping statistics --- 00:09:48.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.931 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1984077 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1984077 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1984077 ']' 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.931 19:17:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.931 [2024-10-17 19:17:12.007511] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
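The namespace plumbing traced above is how a single host gets a real two-endpoint TCP path: the target-side port is moved into its own network namespace, so the initiator (10.0.0.1 on cvl_0_1) and the target (10.0.0.2 on cvl_0_0) actually cross the wire between the two E810 ports. Boiled down to the commands that matter (the same commands as the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the default netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target reachability check

The nvmf_tgt being started here is prefixed with ip netns exec cvl_0_0_ns_spdk, which is why its listener at 10.0.0.2:4420 is only reachable over that path.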
00:09:48.931 [2024-10-17 19:17:12.007555] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.931 [2024-10-17 19:17:12.082952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.931 [2024-10-17 19:17:12.125978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.931 [2024-10-17 19:17:12.126015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.931 [2024-10-17 19:17:12.126022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.931 [2024-10-17 19:17:12.126028] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.931 [2024-10-17 19:17:12.126033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.931 [2024-10-17 19:17:12.127454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.931 [2024-10-17 19:17:12.127564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.931 [2024-10-17 19:17:12.127670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.931 [2024-10-17 19:17:12.127671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.931 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 [2024-10-17 19:17:12.268178] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 Malloc0 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 [2024-10-17 19:17:12.335396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:48.932 test case1: single bdev can't be used in multiple subsystems 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 [2024-10-17 19:17:12.363269] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:48.932 [2024-10-17 19:17:12.363289] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:48.932 [2024-10-17 19:17:12.363296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:48.932 request: 00:09:48.932 { 00:09:48.932 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:48.932 "namespace": { 00:09:48.932 "bdev_name": "Malloc0", 00:09:48.932 "no_auto_visible": false 
00:09:48.932 }, 00:09:48.932 "method": "nvmf_subsystem_add_ns", 00:09:48.932 "req_id": 1 00:09:48.932 } 00:09:48.932 Got JSON-RPC error response 00:09:48.932 response: 00:09:48.932 { 00:09:48.932 "code": -32602, 00:09:48.932 "message": "Invalid parameters" 00:09:48.932 } 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:48.932 Adding namespace failed - expected result. 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:48.932 test case2: host connect to nvmf target in multiple paths 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:48.932 [2024-10-17 19:17:12.375401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.932 19:17:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:49.870 19:17:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:51.249 19:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:51.249 19:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.249 19:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.249 19:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:51.249 19:17:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:53.155 19:17:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
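Condensed, the two nmic test cases traced above are: (1) a bdev can be claimed by only one subsystem, so adding Malloc0 to cnode2 must fail with the -32602 "Invalid parameters" response shown; and (2) one host can reach the same subsystem through two listeners. The equivalent sequence by hand, as a sketch (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # case1: Malloc0 is already claimed exclusive_write by cnode1 -> expected failure
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo "failed as expected"
    # case2: second listener, then connect the host over both ports
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 "${NVME_HOST[@]}"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 "${NVME_HOST[@]}"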
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:53.155 [global] 00:09:53.155 thread=1 00:09:53.155 invalidate=1 00:09:53.155 rw=write 00:09:53.155 time_based=1 00:09:53.155 runtime=1 00:09:53.155 ioengine=libaio 00:09:53.155 direct=1 00:09:53.155 bs=4096 00:09:53.155 iodepth=1 00:09:53.155 norandommap=0 00:09:53.155 numjobs=1 00:09:53.155 00:09:53.155 verify_dump=1 00:09:53.155 verify_backlog=512 00:09:53.155 verify_state_save=0 00:09:53.155 do_verify=1 00:09:53.155 verify=crc32c-intel 00:09:53.155 [job0] 00:09:53.155 filename=/dev/nvme0n1 00:09:53.155 Could not set queue depth (nvme0n1) 00:09:53.414 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:53.414 fio-3.35 00:09:53.414 Starting 1 thread 00:09:54.792 00:09:54.792 job0: (groupid=0, jobs=1): err= 0: pid=1985077: Thu Oct 17 19:17:18 2024 00:09:54.792 read: IOPS=22, BW=90.1KiB/s (92.3kB/s)(92.0KiB/1021msec) 00:09:54.792 slat (nsec): min=9732, max=24906, avg=21021.96, stdev=3465.47 00:09:54.792 clat (usec): min=418, max=41032, avg=39203.36, stdev=8454.81 00:09:54.792 lat (usec): min=429, max=41054, avg=39224.38, stdev=8457.05 00:09:54.792 clat percentiles (usec): 00:09:54.792 | 1.00th=[ 420], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:54.792 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:54.792 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:54.792 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:54.792 | 99.99th=[41157] 00:09:54.792 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:09:54.792 slat (usec): min=12, max=28340, avg=68.58, stdev=1251.89 00:09:54.792 clat (usec): min=115, max=321, avg=158.32, stdev=23.61 00:09:54.792 lat (usec): min=129, max=28553, avg=226.90, stdev=1254.54 00:09:54.792 clat percentiles (usec): 00:09:54.792 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 131], 00:09:54.792 | 30.00th=[ 139], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:09:54.792 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:09:54.792 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 322], 99.95th=[ 322], 00:09:54.792 | 99.99th=[ 322] 00:09:54.792 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:54.792 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:54.792 lat (usec) : 250=95.33%, 500=0.56% 00:09:54.792 lat (msec) : 50=4.11% 00:09:54.792 cpu : usr=0.29%, sys=1.08%, ctx=538, majf=0, minf=1 00:09:54.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:54.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.792 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:54.792 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:54.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:54.792 00:09:54.792 Run status group 0 (all jobs): 00:09:54.792 READ: bw=90.1KiB/s (92.3kB/s), 90.1KiB/s-90.1KiB/s (92.3kB/s-92.3kB/s), io=92.0KiB (94.2kB), run=1021-1021msec 00:09:54.792 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:09:54.792 00:09:54.792 Disk stats (read/write): 00:09:54.792 nvme0n1: ios=47/512, merge=0/0, ticks=1766/68, in_queue=1834, util=98.60% 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
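The generated job above is a one-second, queue-depth-1, 4 KiB sequential-write pass over the freshly connected namespace with crc32c verification; the small read side of the results (23 reads, ~39 ms average completion) is the verify read-back, not a read workload. As a standalone command line the same job would look roughly like this (a sketch; option names taken from the job file itself):

    fio --name=job0 --filename=/dev/nvme0n1 \
        --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
        --verify_backlog=512 --verify_state_save=0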
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:54.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.792 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.792 rmmod nvme_tcp 00:09:54.792 rmmod nvme_fabrics 00:09:54.792 rmmod nvme_keyring 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1984077 ']' 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1984077 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1984077 ']' 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1984077 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1984077 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1984077' 00:09:55.052 killing process with pid 1984077 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1984077 00:09:55.052 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
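Teardown mirrors the bring-up: disconnect the fabric controllers, unload the kernel initiator stack (the rmmod lines above), then kill the target process and wait for it. In outline, same order as the trace:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers (paths 4420 and 4421)
    modprobe -v -r nvme-tcp                         # nvme_tcp, nvme_fabrics, nvme_keyring unload
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                 # pid 1984077 here
    wait "$nvmfpid"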
common/autotest_common.sh@974 -- # wait 1984077 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.311 19:17:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:57.218 00:09:57.218 real 0m15.159s 00:09:57.218 user 0m33.840s 00:09:57.218 sys 0m5.292s 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:57.218 ************************************ 00:09:57.218 END TEST nvmf_nmic 00:09:57.218 ************************************ 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.218 19:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:57.218 ************************************ 00:09:57.218 START TEST nvmf_fio_target 00:09:57.218 ************************************ 00:09:57.219 19:17:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:57.479 * Looking for test storage... 
00:09:57.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.479 --rc genhtml_branch_coverage=1 00:09:57.479 --rc genhtml_function_coverage=1 00:09:57.479 --rc genhtml_legend=1 00:09:57.479 --rc geninfo_all_blocks=1 00:09:57.479 --rc geninfo_unexecuted_blocks=1 00:09:57.479 00:09:57.479 ' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.479 --rc genhtml_branch_coverage=1 00:09:57.479 --rc genhtml_function_coverage=1 00:09:57.479 --rc genhtml_legend=1 00:09:57.479 --rc geninfo_all_blocks=1 00:09:57.479 --rc geninfo_unexecuted_blocks=1 00:09:57.479 00:09:57.479 ' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.479 --rc genhtml_branch_coverage=1 00:09:57.479 --rc genhtml_function_coverage=1 00:09:57.479 --rc genhtml_legend=1 00:09:57.479 --rc geninfo_all_blocks=1 00:09:57.479 --rc geninfo_unexecuted_blocks=1 00:09:57.479 00:09:57.479 ' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:57.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.479 --rc genhtml_branch_coverage=1 00:09:57.479 --rc genhtml_function_coverage=1 00:09:57.479 --rc genhtml_legend=1 00:09:57.479 --rc geninfo_all_blocks=1 00:09:57.479 --rc geninfo_unexecuted_blocks=1 00:09:57.479 00:09:57.479 ' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
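The lcov probe in the preamble above is a pure-bash version comparison: split each version string on ".", "-", or ":" and compare numerically field by field, treating missing fields as zero. The traced cmp_versions/lt logic reduces to something like this (a sketch with an illustrative name, not the helper's literal code):

    # Return success iff version $1 is strictly less than version $2.
    version_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"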
uname -s 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.479 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:57.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:57.480 19:17:21 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:57.480 19:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.060 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.060 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.060 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.060 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.060 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.061 19:17:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:04.061 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:04.061 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:04.061 19:17:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:04.061 Found net devices under 0000:86:00.0: cvl_0_0 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:04.061 Found net devices under 0000:86:00.1: cvl_0_1 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.061 19:17:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:04.061 19:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:04.061 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:04.061 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:04.061 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:04.061 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:04.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:10:04.062 00:10:04.062 --- 10.0.0.2 ping statistics --- 00:10:04.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.062 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:04.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:04.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:10:04.062 00:10:04.062 --- 10.0.0.1 ping statistics --- 00:10:04.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.062 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1988925 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1988925 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1988925 ']' 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.062 [2024-10-17 19:17:27.267281] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
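The nvmf_tcp_init step that just completed builds a point-to-point topology out of the two E810 ports: the target port is moved into a private network namespace, each side gets a /24 address, the NVMe/TCP port is opened in the firewall, and reachability is verified in both directions before nvmf_tgt is launched inside the namespace. The traced commands, gathered in order (the SPDK_NVMF comment tag on the iptables rule is dropped here for brevity):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (private ns)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # default NVMe/TCP port
ping -c 1 10.0.0.2                                                 # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> host

Every target-side command from here on is therefore wrapped in `ip netns exec cvl_0_0_ns_spdk`, which is what NVMF_TARGET_NS_CMD holds.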
00:10:04.062 [2024-10-17 19:17:27.267324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.062 [2024-10-17 19:17:27.346900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.062 [2024-10-17 19:17:27.389170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.062 [2024-10-17 19:17:27.389206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.062 [2024-10-17 19:17:27.389213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.062 [2024-10-17 19:17:27.389219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.062 [2024-10-17 19:17:27.389224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:04.062 [2024-10-17 19:17:27.390642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.062 [2024-10-17 19:17:27.390754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.062 [2024-10-17 19:17:27.390860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.062 [2024-10-17 19:17:27.390862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:04.062 [2024-10-17 19:17:27.683947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.062 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.322 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:04.322 19:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.581 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:04.581 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.839 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:04.839 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:04.839 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:04.840 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:05.098 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.357 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:05.357 19:17:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.616 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:05.616 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:05.616 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:05.616 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:05.874 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:06.133 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:06.133 19:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:06.392 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:06.392 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:06.651 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.651 [2024-10-17 19:17:30.393589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.651 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:06.910 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:07.169 19:17:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:08.546 19:17:32 
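At this point target/fio.sh has provisioned everything the I/O phase needs, one RPC at a time: a TCP transport, seven 64 MiB malloc bdevs with 512-byte blocks, a RAID-0 and a concat bdev layered on four of them, and a subsystem exposing four namespaces on 10.0.0.2:4420. Condensed from the traced RPCs — the rpc path is shortened into a variable here, and the --hostnqn/--hostid flags of the connect call are as traced above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512     # repeated seven times -> Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # + hostnqn/hostid

The four namespaces surface on the initiator as /dev/nvme0n1 through /dev/nvme0n4; waitforserial polls `lsblk -l -o NAME,SERIAL` until four entries carry the subsystem serial, after which the fio-wrapper runs below drive write and randwrite jobs against them, first at iodepth=1 and later at iodepth=128, all with crc32c-intel verification.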
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:08.546 19:17:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:08.546 19:17:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.547 19:17:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:08.547 19:17:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:08.547 19:17:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:10.452 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:10.452 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:10.452 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:10.452 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:10.453 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:10.453 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:10.453 19:17:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:10.453 [global] 00:10:10.453 thread=1 00:10:10.453 invalidate=1 00:10:10.453 rw=write 00:10:10.453 time_based=1 00:10:10.453 runtime=1 00:10:10.453 ioengine=libaio 00:10:10.453 direct=1 00:10:10.453 bs=4096 00:10:10.453 iodepth=1 00:10:10.453 norandommap=0 00:10:10.453 numjobs=1 00:10:10.453 00:10:10.453 verify_dump=1 00:10:10.453 verify_backlog=512 00:10:10.453 verify_state_save=0 00:10:10.453 do_verify=1 00:10:10.453 verify=crc32c-intel 00:10:10.453 [job0] 00:10:10.453 filename=/dev/nvme0n1 00:10:10.453 [job1] 00:10:10.453 filename=/dev/nvme0n2 00:10:10.453 [job2] 00:10:10.453 filename=/dev/nvme0n3 00:10:10.453 [job3] 00:10:10.453 filename=/dev/nvme0n4 00:10:10.453 Could not set queue depth (nvme0n1) 00:10:10.453 Could not set queue depth (nvme0n2) 00:10:10.453 Could not set queue depth (nvme0n3) 00:10:10.453 Could not set queue depth (nvme0n4) 00:10:10.712 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.712 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.712 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.712 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:10.712 fio-3.35 00:10:10.712 Starting 4 threads 00:10:12.100 00:10:12.100 job0: (groupid=0, jobs=1): err= 0: pid=1990275: Thu Oct 17 19:17:35 2024 00:10:12.100 read: IOPS=21, BW=86.4KiB/s (88.5kB/s)(88.0KiB/1018msec) 00:10:12.100 slat (nsec): min=9743, max=26454, avg=23900.14, stdev=3481.04 00:10:12.100 clat (usec): min=40486, max=42000, avg=41037.70, stdev=314.95 00:10:12.100 lat (usec): min=40495, max=42024, avg=41061.60, stdev=316.03 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:10:12.100 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:12.100 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:12.100 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:12.100 | 99.99th=[42206] 00:10:12.100 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:12.100 slat (usec): min=10, max=18842, avg=49.54, stdev=832.17 00:10:12.100 clat (usec): min=126, max=946, avg=169.99, stdev=52.71 00:10:12.100 lat (usec): min=138, max=19086, avg=219.53, stdev=837.12 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:10:12.100 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:10:12.100 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 200], 00:10:12.100 | 99.00th=[ 273], 99.50th=[ 644], 99.90th=[ 947], 99.95th=[ 947], 00:10:12.100 | 99.99th=[ 947] 00:10:12.100 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.100 lat (usec) : 250=93.63%, 500=1.50%, 750=0.56%, 1000=0.19% 00:10:12.100 lat (msec) : 50=4.12% 00:10:12.100 cpu : usr=0.49%, sys=0.88%, ctx=536, majf=0, minf=1 00:10:12.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.100 job1: (groupid=0, jobs=1): err= 0: pid=1990276: Thu Oct 17 19:17:35 2024 00:10:12.100 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:10:12.100 slat (nsec): min=10201, max=23342, avg=22271.30, stdev=2688.69 00:10:12.100 clat (usec): min=40867, max=41950, avg=41021.10, stdev=213.60 00:10:12.100 lat (usec): min=40890, max=41973, avg=41043.37, stdev=213.05 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:12.100 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:12.100 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:12.100 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:12.100 | 99.99th=[42206] 00:10:12.100 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:10:12.100 slat (nsec): min=9433, max=39656, avg=11214.36, stdev=2356.15 00:10:12.100 clat (usec): min=131, max=362, avg=171.15, stdev=25.01 00:10:12.100 lat (usec): min=142, max=394, avg=182.37, stdev=25.82 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:10:12.100 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:12.100 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 229], 00:10:12.100 | 99.00th=[ 260], 99.50th=[ 302], 99.90th=[ 363], 99.95th=[ 363], 00:10:12.100 | 99.99th=[ 363] 00:10:12.100 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.100 lat (usec) : 250=94.21%, 500=1.50% 00:10:12.100 lat (msec) : 50=4.30% 00:10:12.100 cpu : usr=0.39%, sys=0.48%, ctx=535, majf=0, minf=2 00:10:12.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.100 job2: (groupid=0, jobs=1): err= 0: pid=1990277: Thu Oct 17 19:17:35 2024 00:10:12.100 read: IOPS=23, BW=94.3KiB/s (96.6kB/s)(96.0KiB/1018msec) 00:10:12.100 slat (nsec): min=9229, max=24339, avg=21785.25, stdev=4735.35 00:10:12.100 clat (usec): min=208, max=42064, avg=37655.09, stdev=11532.36 00:10:12.100 lat (usec): min=233, max=42087, avg=37676.88, stdev=11531.68 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[ 210], 5.00th=[ 245], 10.00th=[40633], 20.00th=[40633], 00:10:12.100 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:12.100 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:12.100 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:12.100 | 99.99th=[42206] 00:10:12.100 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:12.100 slat (usec): min=9, max=19000, avg=48.70, stdev=839.21 00:10:12.100 clat (usec): min=131, max=275, avg=169.83, stdev=20.31 00:10:12.100 lat (usec): min=143, max=19270, avg=218.52, stdev=843.92 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 153], 00:10:12.100 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:10:12.100 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 204], 00:10:12.100 | 99.00th=[ 235], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 277], 00:10:12.100 | 99.99th=[ 277] 00:10:12.100 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.100 lat (usec) : 250=95.15%, 500=0.75% 00:10:12.100 lat (msec) : 50=4.10% 00:10:12.100 cpu : usr=0.39%, sys=0.49%, ctx=538, majf=0, minf=1 00:10:12.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.100 job3: (groupid=0, jobs=1): err= 0: pid=1990278: Thu Oct 17 19:17:35 2024 00:10:12.100 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:10:12.100 slat (nsec): min=10606, max=24354, avg=23167.27, stdev=2870.84 00:10:12.100 clat (usec): min=40865, max=41803, avg=41020.92, stdev=187.62 00:10:12.100 lat (usec): min=40889, max=41827, avg=41044.09, stdev=187.04 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:12.100 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:12.100 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:12.100 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:12.100 | 99.99th=[41681] 00:10:12.100 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:12.100 slat (nsec): min=9989, max=44589, avg=13158.28, stdev=2919.39 00:10:12.100 clat (usec): min=130, max=327, avg=180.16, stdev=23.49 00:10:12.100 
lat (usec): min=141, max=367, avg=193.32, stdev=24.57 00:10:12.100 clat percentiles (usec): 00:10:12.100 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:10:12.100 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:10:12.100 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 215], 00:10:12.100 | 99.00th=[ 260], 99.50th=[ 314], 99.90th=[ 326], 99.95th=[ 326], 00:10:12.100 | 99.99th=[ 326] 00:10:12.100 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:10:12.100 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:12.100 lat (usec) : 250=94.38%, 500=1.50% 00:10:12.100 lat (msec) : 50=4.12% 00:10:12.100 cpu : usr=0.40%, sys=0.50%, ctx=535, majf=0, minf=1 00:10:12.100 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:12.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.100 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.100 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:12.100 00:10:12.100 Run status group 0 (all jobs): 00:10:12.100 READ: bw=350KiB/s (359kB/s), 86.4KiB/s-94.3KiB/s (88.5kB/s-96.6kB/s), io=364KiB (373kB), run=1004-1039msec 00:10:12.100 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-2040KiB/s (2018kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1039msec 00:10:12.100 00:10:12.100 Disk stats (read/write): 00:10:12.100 nvme0n1: ios=39/512, merge=0/0, ticks=1519/84, in_queue=1603, util=87.06% 00:10:12.100 nvme0n2: ios=67/512, merge=0/0, ticks=758/90, in_queue=848, util=85.38% 00:10:12.100 nvme0n3: ios=40/512, merge=0/0, ticks=1560/87, in_queue=1647, util=95.11% 00:10:12.100 nvme0n4: ios=74/512, merge=0/0, ticks=858/90, in_queue=948, util=97.34% 00:10:12.100 19:17:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:12.100 [global] 00:10:12.100 thread=1 00:10:12.100 invalidate=1 00:10:12.100 rw=randwrite 00:10:12.100 time_based=1 00:10:12.100 runtime=1 00:10:12.100 ioengine=libaio 00:10:12.100 direct=1 00:10:12.100 bs=4096 00:10:12.100 iodepth=1 00:10:12.100 norandommap=0 00:10:12.100 numjobs=1 00:10:12.100 00:10:12.100 verify_dump=1 00:10:12.100 verify_backlog=512 00:10:12.100 verify_state_save=0 00:10:12.100 do_verify=1 00:10:12.100 verify=crc32c-intel 00:10:12.100 [job0] 00:10:12.100 filename=/dev/nvme0n1 00:10:12.100 [job1] 00:10:12.100 filename=/dev/nvme0n2 00:10:12.100 [job2] 00:10:12.100 filename=/dev/nvme0n3 00:10:12.100 [job3] 00:10:12.100 filename=/dev/nvme0n4 00:10:12.100 Could not set queue depth (nvme0n1) 00:10:12.100 Could not set queue depth (nvme0n2) 00:10:12.100 Could not set queue depth (nvme0n3) 00:10:12.101 Could not set queue depth (nvme0n4) 00:10:12.360 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.360 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.360 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.360 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:12.360 fio-3.35 00:10:12.360 Starting 4 threads 00:10:13.762 00:10:13.762 job0: (groupid=0, jobs=1): err= 0: pid=1990649: Thu Oct 17 
19:17:37 2024 00:10:13.762 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:10:13.762 slat (nsec): min=10486, max=24844, avg=18647.77, stdev=5268.16 00:10:13.762 clat (usec): min=40547, max=42063, avg=41143.54, stdev=424.54 00:10:13.762 lat (usec): min=40557, max=42074, avg=41162.19, stdev=424.04 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:13.762 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:13.762 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:13.762 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.762 | 99.99th=[42206] 00:10:13.762 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:10:13.762 slat (nsec): min=10446, max=49356, avg=12733.84, stdev=2894.74 00:10:13.762 clat (usec): min=133, max=368, avg=184.50, stdev=21.48 00:10:13.762 lat (usec): min=144, max=380, avg=197.24, stdev=22.30 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 159], 20.00th=[ 169], 00:10:13.762 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:10:13.762 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 208], 95.00th=[ 217], 00:10:13.762 | 99.00th=[ 235], 99.50th=[ 269], 99.90th=[ 371], 99.95th=[ 371], 00:10:13.762 | 99.99th=[ 371] 00:10:13.762 bw ( KiB/s): min= 4096, max= 4096, per=25.07%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.762 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.762 lat (usec) : 250=94.94%, 500=0.94% 00:10:13.762 lat (msec) : 50=4.12% 00:10:13.762 cpu : usr=0.50%, sys=0.89%, ctx=538, majf=0, minf=1 00:10:13.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.762 job1: (groupid=0, jobs=1): err= 0: pid=1990650: Thu Oct 17 19:17:37 2024 00:10:13.762 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:10:13.762 slat (nsec): min=10105, max=23347, avg=21786.22, stdev=2633.14 00:10:13.762 clat (usec): min=40868, max=42081, avg=41117.01, stdev=375.83 00:10:13.762 lat (usec): min=40890, max=42104, avg=41138.80, stdev=375.31 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:13.762 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:13.762 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:13.762 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.762 | 99.99th=[42206] 00:10:13.762 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:10:13.762 slat (nsec): min=9042, max=38727, avg=10186.75, stdev=1933.35 00:10:13.762 clat (usec): min=123, max=619, avg=157.61, stdev=37.98 00:10:13.762 lat (usec): min=133, max=629, avg=167.79, stdev=38.26 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 137], 20.00th=[ 141], 00:10:13.762 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:10:13.762 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 186], 00:10:13.762 | 99.00th=[ 273], 99.50th=[ 498], 99.90th=[ 619], 99.95th=[ 619], 00:10:13.762 
| 99.99th=[ 619] 00:10:13.762 bw ( KiB/s): min= 4096, max= 4096, per=25.07%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.762 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.762 lat (usec) : 250=94.39%, 500=0.93%, 750=0.37% 00:10:13.762 lat (msec) : 50=4.30% 00:10:13.762 cpu : usr=0.29%, sys=0.48%, ctx=535, majf=0, minf=1 00:10:13.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.762 job2: (groupid=0, jobs=1): err= 0: pid=1990656: Thu Oct 17 19:17:37 2024 00:10:13.762 read: IOPS=25, BW=100KiB/s (102kB/s)(104KiB/1040msec) 00:10:13.762 slat (nsec): min=7846, max=26738, avg=12349.08, stdev=4840.90 00:10:13.762 clat (usec): min=243, max=41996, avg=36407.67, stdev=13290.31 00:10:13.762 lat (usec): min=253, max=42006, avg=36420.02, stdev=13287.99 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[ 245], 5.00th=[ 343], 10.00th=[ 433], 20.00th=[40633], 00:10:13.762 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:13.762 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:13.762 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:13.762 | 99.99th=[42206] 00:10:13.762 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:13.762 slat (nsec): min=9340, max=38134, avg=10907.02, stdev=2174.56 00:10:13.762 clat (usec): min=134, max=275, avg=169.19, stdev=18.08 00:10:13.762 lat (usec): min=145, max=313, avg=180.10, stdev=18.62 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:13.762 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:10:13.762 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:10:13.762 | 99.00th=[ 229], 99.50th=[ 262], 99.90th=[ 277], 99.95th=[ 277], 00:10:13.762 | 99.99th=[ 277] 00:10:13.762 bw ( KiB/s): min= 4096, max= 4096, per=25.07%, avg=4096.00, stdev= 0.00, samples=1 00:10:13.762 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:13.762 lat (usec) : 250=94.80%, 500=0.93% 00:10:13.762 lat (msec) : 50=4.28% 00:10:13.762 cpu : usr=0.29%, sys=0.48%, ctx=538, majf=0, minf=1 00:10:13.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 issued rwts: total=26,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.762 job3: (groupid=0, jobs=1): err= 0: pid=1990657: Thu Oct 17 19:17:37 2024 00:10:13.762 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:13.762 slat (nsec): min=7303, max=40390, avg=8290.38, stdev=1389.74 00:10:13.762 clat (usec): min=168, max=395, avg=199.88, stdev=18.99 00:10:13.762 lat (usec): min=176, max=403, avg=208.17, stdev=19.03 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:10:13.762 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:10:13.762 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 217], 
95.00th=[ 245], 00:10:13.762 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 297], 99.95th=[ 388], 00:10:13.762 | 99.99th=[ 396] 00:10:13.762 write: IOPS=2709, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:10:13.762 slat (nsec): min=10582, max=45122, avg=12101.00, stdev=2094.96 00:10:13.762 clat (usec): min=108, max=385, avg=154.26, stdev=25.14 00:10:13.762 lat (usec): min=132, max=397, avg=166.36, stdev=25.68 00:10:13.762 clat percentiles (usec): 00:10:13.762 | 1.00th=[ 127], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:10:13.762 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:10:13.762 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 200], 00:10:13.762 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 334], 99.95th=[ 347], 00:10:13.762 | 99.99th=[ 388] 00:10:13.762 bw ( KiB/s): min=12288, max=12288, per=75.21%, avg=12288.00, stdev= 0.00, samples=1 00:10:13.762 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:13.762 lat (usec) : 250=97.40%, 500=2.60% 00:10:13.762 cpu : usr=4.20%, sys=8.70%, ctx=5273, majf=0, minf=1 00:10:13.762 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:13.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.762 issued rwts: total=2560,2712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.762 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:13.762 00:10:13.762 Run status group 0 (all jobs): 00:10:13.762 READ: bw=9.88MiB/s (10.4MB/s), 87.2KiB/s-9.99MiB/s (89.3kB/s-10.5MB/s), io=10.3MiB (10.8MB), run=1001-1040msec 00:10:13.762 WRITE: bw=16.0MiB/s (16.7MB/s), 1969KiB/s-10.6MiB/s (2016kB/s-11.1MB/s), io=16.6MiB (17.4MB), run=1001-1040msec 00:10:13.762 00:10:13.762 Disk stats (read/write): 00:10:13.762 nvme0n1: ios=68/512, merge=0/0, ticks=786/91, in_queue=877, util=86.27% 00:10:13.762 nvme0n2: ios=68/512, merge=0/0, ticks=810/74, in_queue=884, util=91.38% 00:10:13.762 nvme0n3: ios=78/512, merge=0/0, ticks=816/81, in_queue=897, util=94.70% 00:10:13.762 nvme0n4: ios=2071/2534, merge=0/0, ticks=1295/351, in_queue=1646, util=94.35% 00:10:13.762 19:17:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:13.762 [global] 00:10:13.762 thread=1 00:10:13.762 invalidate=1 00:10:13.762 rw=write 00:10:13.762 time_based=1 00:10:13.762 runtime=1 00:10:13.762 ioengine=libaio 00:10:13.762 direct=1 00:10:13.762 bs=4096 00:10:13.762 iodepth=128 00:10:13.762 norandommap=0 00:10:13.762 numjobs=1 00:10:13.762 00:10:13.762 verify_dump=1 00:10:13.762 verify_backlog=512 00:10:13.762 verify_state_save=0 00:10:13.762 do_verify=1 00:10:13.762 verify=crc32c-intel 00:10:13.762 [job0] 00:10:13.762 filename=/dev/nvme0n1 00:10:13.762 [job1] 00:10:13.762 filename=/dev/nvme0n2 00:10:13.762 [job2] 00:10:13.762 filename=/dev/nvme0n3 00:10:13.762 [job3] 00:10:13.762 filename=/dev/nvme0n4 00:10:13.762 Could not set queue depth (nvme0n1) 00:10:13.762 Could not set queue depth (nvme0n2) 00:10:13.762 Could not set queue depth (nvme0n3) 00:10:13.762 Could not set queue depth (nvme0n4) 00:10:14.028 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.028 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.028 job2: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.028 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:14.028 fio-3.35 00:10:14.028 Starting 4 threads 00:10:15.405 00:10:15.405 job0: (groupid=0, jobs=1): err= 0: pid=1991024: Thu Oct 17 19:17:38 2024 00:10:15.405 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:10:15.405 slat (nsec): min=1303, max=8543.6k, avg=85665.08, stdev=526659.91 00:10:15.405 clat (usec): min=3757, max=24425, avg=10823.40, stdev=2208.88 00:10:15.405 lat (usec): min=3763, max=24428, avg=10909.07, stdev=2250.59 00:10:15.405 clat percentiles (usec): 00:10:15.405 | 1.00th=[ 6652], 5.00th=[ 7898], 10.00th=[ 8094], 20.00th=[ 8848], 00:10:15.405 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10683], 60.00th=[11338], 00:10:15.405 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12649], 95.00th=[13960], 00:10:15.405 | 99.00th=[18482], 99.50th=[21365], 99.90th=[23462], 99.95th=[24511], 00:10:15.405 | 99.99th=[24511] 00:10:15.405 write: IOPS=5645, BW=22.1MiB/s (23.1MB/s)(22.1MiB/1002msec); 0 zone resets 00:10:15.405 slat (usec): min=2, max=9302, avg=83.37, stdev=415.37 00:10:15.405 clat (usec): min=1763, max=33367, avg=11659.69, stdev=4461.78 00:10:15.405 lat (usec): min=1772, max=33378, avg=11743.07, stdev=4490.09 00:10:15.405 clat percentiles (usec): 00:10:15.405 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 7898], 20.00th=[ 8848], 00:10:15.405 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10814], 60.00th=[11469], 00:10:15.405 | 70.00th=[12125], 80.00th=[12518], 90.00th=[16712], 95.00th=[22676], 00:10:15.405 | 99.00th=[30278], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:10:15.405 | 99.99th=[33424] 00:10:15.405 bw ( KiB/s): min=21624, max=23432, per=31.06%, avg=22528.00, stdev=1278.45, samples=2 00:10:15.405 iops : min= 5406, max= 5858, avg=5632.00, stdev=319.61, samples=2 00:10:15.405 lat (msec) : 2=0.04%, 4=0.39%, 10=30.85%, 20=65.03%, 50=3.69% 00:10:15.405 cpu : usr=3.40%, sys=6.99%, ctx=595, majf=0, minf=1 00:10:15.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:15.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.405 issued rwts: total=5632,5657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.405 job1: (groupid=0, jobs=1): err= 0: pid=1991025: Thu Oct 17 19:17:38 2024 00:10:15.405 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:15.405 slat (nsec): min=1390, max=16709k, avg=107516.13, stdev=842795.86 00:10:15.405 clat (usec): min=2129, max=38632, avg=13046.64, stdev=4815.99 00:10:15.405 lat (usec): min=2966, max=38638, avg=13154.16, stdev=4861.54 00:10:15.405 clat percentiles (usec): 00:10:15.405 | 1.00th=[ 4228], 5.00th=[ 8225], 10.00th=[ 9503], 20.00th=[10028], 00:10:15.405 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11600], 60.00th=[12125], 00:10:15.405 | 70.00th=[13698], 80.00th=[16712], 90.00th=[18744], 95.00th=[21103], 00:10:15.405 | 99.00th=[32900], 99.50th=[35914], 99.90th=[37487], 99.95th=[38536], 00:10:15.405 | 99.99th=[38536] 00:10:15.405 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:15.405 slat (usec): min=2, max=17676, avg=82.38, stdev=499.89 00:10:15.405 clat (usec): min=1432, max=38616, avg=11785.78, stdev=4388.76 00:10:15.405 lat (usec): min=1448, max=38620, avg=11868.17, 
stdev=4439.94 00:10:15.405 clat percentiles (usec): 00:10:15.405 | 1.00th=[ 2769], 5.00th=[ 4883], 10.00th=[ 6783], 20.00th=[ 9634], 00:10:15.405 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10683], 60.00th=[11731], 00:10:15.406 | 70.00th=[11994], 80.00th=[16188], 90.00th=[17695], 95.00th=[20317], 00:10:15.406 | 99.00th=[23725], 99.50th=[25560], 99.90th=[34866], 99.95th=[35390], 00:10:15.406 | 99.99th=[38536] 00:10:15.406 bw ( KiB/s): min=16688, max=24272, per=28.23%, avg=20480.00, stdev=5362.70, samples=2 00:10:15.406 iops : min= 4172, max= 6068, avg=5120.00, stdev=1340.67, samples=2 00:10:15.406 lat (msec) : 2=0.06%, 4=2.00%, 10=20.13%, 20=70.72%, 50=7.09% 00:10:15.406 cpu : usr=2.89%, sys=5.78%, ctx=664, majf=0, minf=2 00:10:15.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:15.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.406 issued rwts: total=5120,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.406 job2: (groupid=0, jobs=1): err= 0: pid=1991026: Thu Oct 17 19:17:38 2024 00:10:15.406 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:10:15.406 slat (nsec): min=1412, max=17030k, avg=129662.41, stdev=929653.21 00:10:15.406 clat (usec): min=6124, max=35584, avg=16784.16, stdev=4540.51 00:10:15.406 lat (usec): min=6135, max=36810, avg=16913.82, stdev=4603.37 00:10:15.406 clat percentiles (usec): 00:10:15.406 | 1.00th=[10028], 5.00th=[10814], 10.00th=[11469], 20.00th=[13566], 00:10:15.406 | 30.00th=[13829], 40.00th=[14222], 50.00th=[16188], 60.00th=[17171], 00:10:15.406 | 70.00th=[18482], 80.00th=[21365], 90.00th=[22938], 95.00th=[24773], 00:10:15.406 | 99.00th=[29230], 99.50th=[32375], 99.90th=[33424], 99.95th=[33424], 00:10:15.406 | 99.99th=[35390] 00:10:15.406 write: IOPS=4022, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1008msec); 0 zone resets 00:10:15.406 slat (usec): min=2, max=15680, avg=120.67, stdev=645.79 00:10:15.406 clat (usec): min=504, max=37033, avg=16654.88, stdev=5984.44 00:10:15.406 lat (usec): min=517, max=37058, avg=16775.55, stdev=6046.05 00:10:15.406 clat percentiles (usec): 00:10:15.406 | 1.00th=[ 1729], 5.00th=[ 7439], 10.00th=[ 9241], 20.00th=[12911], 00:10:15.406 | 30.00th=[13698], 40.00th=[13829], 50.00th=[15926], 60.00th=[17171], 00:10:15.406 | 70.00th=[19792], 80.00th=[22414], 90.00th=[24773], 95.00th=[27395], 00:10:15.406 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33817], 99.95th=[33817], 00:10:15.406 | 99.99th=[36963] 00:10:15.406 bw ( KiB/s): min=14664, max=16752, per=21.66%, avg=15708.00, stdev=1476.44, samples=2 00:10:15.406 iops : min= 3666, max= 4188, avg=3927.00, stdev=369.11, samples=2 00:10:15.406 lat (usec) : 750=0.14%, 1000=0.33% 00:10:15.406 lat (msec) : 2=0.33%, 4=0.16%, 10=5.13%, 20=67.64%, 50=26.27% 00:10:15.406 cpu : usr=3.08%, sys=5.56%, ctx=429, majf=0, minf=1 00:10:15.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:15.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.406 issued rwts: total=3584,4055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.406 job3: (groupid=0, jobs=1): err= 0: pid=1991027: Thu Oct 17 19:17:38 2024 00:10:15.406 read: IOPS=3047, BW=11.9MiB/s 
(12.5MB/s)(12.0MiB/1008msec) 00:10:15.406 slat (nsec): min=1447, max=13004k, avg=132162.77, stdev=888855.22 00:10:15.406 clat (usec): min=5991, max=43010, avg=15794.03, stdev=5438.75 00:10:15.406 lat (usec): min=5997, max=43019, avg=15926.20, stdev=5531.10 00:10:15.406 clat percentiles (usec): 00:10:15.406 | 1.00th=[ 6063], 5.00th=[ 8979], 10.00th=[11731], 20.00th=[13042], 00:10:15.406 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[14746], 00:10:15.406 | 70.00th=[16057], 80.00th=[19268], 90.00th=[22676], 95.00th=[24773], 00:10:15.406 | 99.00th=[36439], 99.50th=[38011], 99.90th=[43254], 99.95th=[43254], 00:10:15.406 | 99.99th=[43254] 00:10:15.406 write: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1008msec); 0 zone resets 00:10:15.406 slat (usec): min=2, max=35929, avg=163.35, stdev=1105.19 00:10:15.406 clat (usec): min=3399, max=75386, avg=23019.37, stdev=13415.83 00:10:15.406 lat (usec): min=3407, max=75402, avg=23182.72, stdev=13506.44 00:10:15.406 clat percentiles (usec): 00:10:15.406 | 1.00th=[ 3621], 5.00th=[10290], 10.00th=[12387], 20.00th=[13173], 00:10:15.406 | 30.00th=[13435], 40.00th=[13566], 50.00th=[17171], 60.00th=[22676], 00:10:15.406 | 70.00th=[27132], 80.00th=[33817], 90.00th=[41157], 95.00th=[51119], 00:10:15.406 | 99.00th=[66847], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:10:15.406 | 99.99th=[74974] 00:10:15.406 bw ( KiB/s): min=11864, max=14688, per=18.30%, avg=13276.00, stdev=1996.87, samples=2 00:10:15.406 iops : min= 2966, max= 3672, avg=3319.00, stdev=499.22, samples=2 00:10:15.406 lat (msec) : 4=0.64%, 10=4.83%, 20=60.16%, 50=31.32%, 100=3.04% 00:10:15.406 cpu : usr=3.18%, sys=3.77%, ctx=339, majf=0, minf=1 00:10:15.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:15.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:15.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:15.406 issued rwts: total=3072,3447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:15.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:15.406 00:10:15.406 Run status group 0 (all jobs): 00:10:15.406 READ: bw=67.5MiB/s (70.7MB/s), 11.9MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1002-1008msec 00:10:15.406 WRITE: bw=70.8MiB/s (74.3MB/s), 13.4MiB/s-22.1MiB/s (14.0MB/s-23.1MB/s), io=71.4MiB (74.9MB), run=1002-1008msec 00:10:15.406 00:10:15.406 Disk stats (read/write): 00:10:15.406 nvme0n1: ios=4658/4851, merge=0/0, ticks=29890/35126, in_queue=65016, util=86.66% 00:10:15.406 nvme0n2: ios=4119/4263, merge=0/0, ticks=54682/51189, in_queue=105871, util=98.37% 00:10:15.406 nvme0n3: ios=3177/3584, merge=0/0, ticks=44577/45852, in_queue=90429, util=96.88% 00:10:15.406 nvme0n4: ios=2586/3039, merge=0/0, ticks=19095/30854, in_queue=49949, util=94.96% 00:10:15.406 19:17:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:15.406 [global] 00:10:15.406 thread=1 00:10:15.406 invalidate=1 00:10:15.406 rw=randwrite 00:10:15.406 time_based=1 00:10:15.406 runtime=1 00:10:15.406 ioengine=libaio 00:10:15.406 direct=1 00:10:15.406 bs=4096 00:10:15.406 iodepth=128 00:10:15.406 norandommap=0 00:10:15.406 numjobs=1 00:10:15.406 00:10:15.406 verify_dump=1 00:10:15.406 verify_backlog=512 00:10:15.406 verify_state_save=0 00:10:15.406 do_verify=1 00:10:15.406 verify=crc32c-intel 00:10:15.406 [job0] 00:10:15.406 filename=/dev/nvme0n1 00:10:15.406 
[job1] 00:10:15.406 filename=/dev/nvme0n2 00:10:15.406 [job2] 00:10:15.406 filename=/dev/nvme0n3 00:10:15.406 [job3] 00:10:15.406 filename=/dev/nvme0n4 00:10:15.406 Could not set queue depth (nvme0n1) 00:10:15.406 Could not set queue depth (nvme0n2) 00:10:15.406 Could not set queue depth (nvme0n3) 00:10:15.406 Could not set queue depth (nvme0n4) 00:10:15.406 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.406 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.406 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.406 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:15.406 fio-3.35 00:10:15.406 Starting 4 threads 00:10:16.784 00:10:16.784 job0: (groupid=0, jobs=1): err= 0: pid=1991404: Thu Oct 17 19:17:40 2024 00:10:16.784 read: IOPS=3172, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1008msec) 00:10:16.784 slat (nsec): min=1248, max=25007k, avg=130683.07, stdev=984275.04 00:10:16.784 clat (usec): min=1359, max=62942, avg=15871.38, stdev=7573.34 00:10:16.784 lat (usec): min=3055, max=62957, avg=16002.06, stdev=7645.29 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 4948], 5.00th=[ 5473], 10.00th=[ 9241], 20.00th=[10421], 00:10:16.784 | 30.00th=[12125], 40.00th=[13304], 50.00th=[15926], 60.00th=[16450], 00:10:16.784 | 70.00th=[16909], 80.00th=[19006], 90.00th=[22938], 95.00th=[28443], 00:10:16.784 | 99.00th=[57934], 99.50th=[58459], 99.90th=[58459], 99.95th=[58459], 00:10:16.784 | 99.99th=[63177] 00:10:16.784 write: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec); 0 zone resets 00:10:16.784 slat (nsec): min=1924, max=33735k, avg=152138.80, stdev=1031993.90 00:10:16.784 clat (usec): min=710, max=111680, avg=21464.99, stdev=18487.34 00:10:16.784 lat (usec): min=1628, max=111694, avg=21617.13, stdev=18604.77 00:10:16.784 clat percentiles (msec): 00:10:16.784 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 11], 00:10:16.784 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 20], 00:10:16.784 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 36], 95.00th=[ 57], 00:10:16.784 | 99.00th=[ 108], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112], 00:10:16.784 | 99.99th=[ 112] 00:10:16.784 bw ( KiB/s): min=14160, max=14496, per=20.74%, avg=14328.00, stdev=237.59, samples=2 00:10:16.784 iops : min= 3540, max= 3624, avg=3582.00, stdev=59.40, samples=2 00:10:16.784 lat (usec) : 750=0.01% 00:10:16.784 lat (msec) : 2=0.03%, 4=0.27%, 10=14.27%, 20=56.28%, 50=25.17% 00:10:16.784 lat (msec) : 100=2.93%, 250=1.03% 00:10:16.784 cpu : usr=2.48%, sys=4.77%, ctx=342, majf=0, minf=1 00:10:16.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:16.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.784 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.784 job1: (groupid=0, jobs=1): err= 0: pid=1991405: Thu Oct 17 19:17:40 2024 00:10:16.784 read: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(17.4MiB/1008msec) 00:10:16.784 slat (nsec): min=1165, max=20415k, avg=104015.60, stdev=769042.08 00:10:16.784 clat (usec): min=845, max=49190, avg=13350.70, stdev=5690.20 00:10:16.784 lat (usec): min=5352, max=49215, 
avg=13454.72, stdev=5757.87 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 5473], 5.00th=[ 8094], 10.00th=[ 9372], 20.00th=[ 9765], 00:10:16.784 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11600], 60.00th=[12125], 00:10:16.784 | 70.00th=[13435], 80.00th=[15664], 90.00th=[20841], 95.00th=[26346], 00:10:16.784 | 99.00th=[32113], 99.50th=[32900], 99.90th=[39584], 99.95th=[39584], 00:10:16.784 | 99.99th=[49021] 00:10:16.784 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:10:16.784 slat (nsec): min=1864, max=50063k, avg=106703.22, stdev=1250341.43 00:10:16.784 clat (usec): min=476, max=61752, avg=14821.60, stdev=11855.39 00:10:16.784 lat (usec): min=483, max=61783, avg=14928.30, stdev=11917.90 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 2999], 5.00th=[ 6063], 10.00th=[ 8029], 20.00th=[ 9241], 00:10:16.784 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11600], 00:10:16.784 | 70.00th=[13042], 80.00th=[16450], 90.00th=[24249], 95.00th=[55837], 00:10:16.784 | 99.00th=[58983], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:10:16.784 | 99.99th=[61604] 00:10:16.784 bw ( KiB/s): min=18200, max=18664, per=26.68%, avg=18432.00, stdev=328.10, samples=2 00:10:16.784 iops : min= 4550, max= 4666, avg=4608.00, stdev=82.02, samples=2 00:10:16.784 lat (usec) : 500=0.04%, 750=0.07%, 1000=0.03% 00:10:16.784 lat (msec) : 2=0.21%, 4=1.30%, 10=35.40%, 20=47.22%, 50=12.92% 00:10:16.784 lat (msec) : 100=2.80% 00:10:16.784 cpu : usr=2.98%, sys=5.46%, ctx=316, majf=0, minf=1 00:10:16.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:16.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.784 issued rwts: total=4449,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.784 job2: (groupid=0, jobs=1): err= 0: pid=1991406: Thu Oct 17 19:17:40 2024 00:10:16.784 read: IOPS=5299, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1002msec) 00:10:16.784 slat (nsec): min=1143, max=19180k, avg=97635.34, stdev=735413.04 00:10:16.784 clat (usec): min=758, max=60051, avg=12448.56, stdev=6357.47 00:10:16.784 lat (usec): min=3743, max=60061, avg=12546.20, stdev=6399.55 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 4883], 5.00th=[ 6652], 10.00th=[ 8455], 20.00th=[ 9110], 00:10:16.784 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:10:16.784 | 70.00th=[12256], 80.00th=[13435], 90.00th=[16057], 95.00th=[21627], 00:10:16.784 | 99.00th=[41681], 99.50th=[42730], 99.90th=[60031], 99.95th=[60031], 00:10:16.784 | 99.99th=[60031] 00:10:16.784 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:16.784 slat (nsec): min=1964, max=8824.6k, avg=75299.92, stdev=494071.79 00:10:16.784 clat (usec): min=1021, max=57016, avg=10820.49, stdev=4608.19 00:10:16.784 lat (usec): min=1029, max=57018, avg=10895.79, stdev=4627.75 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 2999], 5.00th=[ 5538], 10.00th=[ 6718], 20.00th=[ 8455], 00:10:16.784 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[11469], 00:10:16.784 | 70.00th=[11731], 80.00th=[11994], 90.00th=[13566], 95.00th=[15533], 00:10:16.784 | 99.00th=[35390], 99.50th=[46400], 99.90th=[52691], 99.95th=[52691], 00:10:16.784 | 99.99th=[56886] 00:10:16.784 bw ( KiB/s): min=21736, max=23320, per=32.61%, avg=22528.00, stdev=1120.06, 
samples=2 00:10:16.784 iops : min= 5434, max= 5830, avg=5632.00, stdev=280.01, samples=2 00:10:16.784 lat (usec) : 1000=0.01% 00:10:16.784 lat (msec) : 2=0.11%, 4=1.12%, 10=36.05%, 20=58.31%, 50=4.20% 00:10:16.784 lat (msec) : 100=0.19% 00:10:16.784 cpu : usr=2.50%, sys=6.39%, ctx=456, majf=0, minf=2 00:10:16.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:16.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.784 issued rwts: total=5310,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.784 job3: (groupid=0, jobs=1): err= 0: pid=1991408: Thu Oct 17 19:17:40 2024 00:10:16.784 read: IOPS=3335, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1003msec) 00:10:16.784 slat (nsec): min=1731, max=26797k, avg=137587.70, stdev=1113620.08 00:10:16.784 clat (usec): min=2074, max=85291, avg=15406.80, stdev=7841.43 00:10:16.784 lat (usec): min=2080, max=85294, avg=15544.38, stdev=7954.99 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 5473], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10945], 00:10:16.784 | 30.00th=[11994], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:10:16.784 | 70.00th=[15008], 80.00th=[18744], 90.00th=[23462], 95.00th=[28181], 00:10:16.784 | 99.00th=[48497], 99.50th=[51119], 99.90th=[85459], 99.95th=[85459], 00:10:16.784 | 99.99th=[85459] 00:10:16.784 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:10:16.784 slat (usec): min=2, max=22177, avg=142.12, stdev=970.99 00:10:16.784 clat (usec): min=3162, max=88621, avg=21059.75, stdev=13163.53 00:10:16.784 lat (usec): min=3172, max=88641, avg=21201.88, stdev=13235.13 00:10:16.784 clat percentiles (usec): 00:10:16.784 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11469], 00:10:16.784 | 30.00th=[11731], 40.00th=[13698], 50.00th=[16450], 60.00th=[22938], 00:10:16.784 | 70.00th=[24249], 80.00th=[28443], 90.00th=[37487], 95.00th=[41681], 00:10:16.784 | 99.00th=[85459], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:10:16.784 | 99.99th=[88605] 00:10:16.784 bw ( KiB/s): min=12288, max=16384, per=20.75%, avg=14336.00, stdev=2896.31, samples=2 00:10:16.784 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:10:16.784 lat (msec) : 4=0.14%, 10=8.40%, 20=61.57%, 50=28.53%, 100=1.36% 00:10:16.784 cpu : usr=3.29%, sys=5.19%, ctx=269, majf=0, minf=1 00:10:16.784 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:16.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:16.784 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:16.784 issued rwts: total=3346,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:16.784 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:16.784 00:10:16.784 Run status group 0 (all jobs): 00:10:16.784 READ: bw=63.2MiB/s (66.2MB/s), 12.4MiB/s-20.7MiB/s (13.0MB/s-21.7MB/s), io=63.7MiB (66.8MB), run=1002-1008msec 00:10:16.784 WRITE: bw=67.5MiB/s (70.7MB/s), 13.9MiB/s-22.0MiB/s (14.6MB/s-23.0MB/s), io=68.0MiB (71.3MB), run=1002-1008msec 00:10:16.784 00:10:16.784 Disk stats (read/write): 00:10:16.784 nvme0n1: ios=2715/3072, merge=0/0, ticks=21726/35513, in_queue=57239, util=98.30% 00:10:16.784 nvme0n2: ios=3883/4096, merge=0/0, ticks=35368/30011, in_queue=65379, util=98.27% 00:10:16.784 nvme0n3: ios=4632/4735, merge=0/0, ticks=38739/33418, 
in_queue=72157, util=97.92% 00:10:16.784 nvme0n4: ios=2583/2791, merge=0/0, ticks=39651/63305, in_queue=102956, util=98.22% 00:10:16.784 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:16.784 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1991637 00:10:16.784 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:16.785 19:17:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:16.785 [global] 00:10:16.785 thread=1 00:10:16.785 invalidate=1 00:10:16.785 rw=read 00:10:16.785 time_based=1 00:10:16.785 runtime=10 00:10:16.785 ioengine=libaio 00:10:16.785 direct=1 00:10:16.785 bs=4096 00:10:16.785 iodepth=1 00:10:16.785 norandommap=1 00:10:16.785 numjobs=1 00:10:16.785 00:10:16.785 [job0] 00:10:16.785 filename=/dev/nvme0n1 00:10:16.785 [job1] 00:10:16.785 filename=/dev/nvme0n2 00:10:16.785 [job2] 00:10:16.785 filename=/dev/nvme0n3 00:10:16.785 [job3] 00:10:16.785 filename=/dev/nvme0n4 00:10:16.785 Could not set queue depth (nvme0n1) 00:10:16.785 Could not set queue depth (nvme0n2) 00:10:16.785 Could not set queue depth (nvme0n3) 00:10:16.785 Could not set queue depth (nvme0n4) 00:10:17.043 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.043 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.043 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.043 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.043 fio-3.35 00:10:17.043 Starting 4 threads 00:10:20.330 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:20.330 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:20.330 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=278528, buflen=4096 00:10:20.330 fio: pid=1991845, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:20.330 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:10:20.330 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.330 fio: pid=1991838, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:20.330 19:17:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:20.330 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13819904, buflen=4096 00:10:20.330 fio: pid=1991801, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:20.330 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.330 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:20.590 fio: io_u error 
on file /dev/nvme0n2: Operation not supported: read offset=331776, buflen=4096 00:10:20.590 fio: pid=1991818, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:20.590 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.590 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:20.590 00:10:20.590 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1991801: Thu Oct 17 19:17:44 2024 00:10:20.590 read: IOPS=1072, BW=4289KiB/s (4391kB/s)(13.2MiB/3147msec) 00:10:20.590 slat (usec): min=6, max=23703, avg=19.55, stdev=502.03 00:10:20.590 clat (usec): min=164, max=42156, avg=905.57, stdev=5314.96 00:10:20.590 lat (usec): min=171, max=42178, avg=925.12, stdev=5339.25 00:10:20.590 clat percentiles (usec): 00:10:20.590 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:10:20.590 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 204], 00:10:20.590 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 245], 00:10:20.590 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:20.590 | 99.99th=[42206] 00:10:20.590 bw ( KiB/s): min= 96, max=18656, per=99.84%, avg=4282.17, stdev=7514.31, samples=6 00:10:20.590 iops : min= 24, max= 4664, avg=1070.50, stdev=1878.56, samples=6 00:10:20.590 lat (usec) : 250=95.38%, 500=2.79%, 750=0.03%, 1000=0.03% 00:10:20.590 lat (msec) : 4=0.03%, 50=1.72% 00:10:20.590 cpu : usr=0.19%, sys=1.05%, ctx=3378, majf=0, minf=1 00:10:20.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 issued rwts: total=3375,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.590 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1991818: Thu Oct 17 19:17:44 2024 00:10:20.590 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(324KiB/3353msec) 00:10:20.590 slat (usec): min=8, max=2823, avg=56.24, stdev=309.44 00:10:20.590 clat (usec): min=40769, max=42066, avg=41071.06, stdev=301.93 00:10:20.590 lat (usec): min=40791, max=44102, avg=41127.69, stdev=449.41 00:10:20.590 clat percentiles (usec): 00:10:20.590 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:20.590 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.590 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:10:20.590 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:20.590 | 99.99th=[42206] 00:10:20.590 bw ( KiB/s): min= 96, max= 100, per=2.24%, avg=96.67, stdev= 1.63, samples=6 00:10:20.590 iops : min= 24, max= 25, avg=24.17, stdev= 0.41, samples=6 00:10:20.590 lat (msec) : 50=98.78% 00:10:20.590 cpu : usr=0.09%, sys=0.00%, ctx=84, majf=0, minf=2 00:10:20.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:20.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.590 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1991838: Thu Oct 17 19:17:44 2024 00:10:20.590 read: IOPS=24, BW=97.8KiB/s (100kB/s)(288KiB/2946msec) 00:10:20.590 slat (usec): min=14, max=12911, avg=199.06, stdev=1508.59 00:10:20.590 clat (usec): min=430, max=41199, avg=40411.91, stdev=4778.54 00:10:20.590 lat (usec): min=461, max=53933, avg=40613.39, stdev=5035.20 00:10:20.590 clat percentiles (usec): 00:10:20.590 | 1.00th=[ 433], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:20.590 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.590 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.590 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.590 | 99.99th=[41157] 00:10:20.590 bw ( KiB/s): min= 96, max= 104, per=2.31%, avg=99.20, stdev= 4.38, samples=5 00:10:20.590 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:10:20.590 lat (usec) : 500=1.37% 00:10:20.590 lat (msec) : 50=97.26% 00:10:20.590 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=2 00:10:20.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.590 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1991845: Thu Oct 17 19:17:44 2024 00:10:20.590 read: IOPS=25, BW=99.7KiB/s (102kB/s)(272KiB/2728msec) 00:10:20.590 slat (nsec): min=11795, max=30566, avg=22467.30, stdev=2066.65 00:10:20.590 clat (usec): min=457, max=41171, avg=39775.05, stdev=6885.52 00:10:20.590 lat (usec): min=480, max=41193, avg=39797.50, stdev=6884.77 00:10:20.590 clat percentiles (usec): 00:10:20.590 | 1.00th=[ 457], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:20.590 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:20.590 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:20.590 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:20.590 | 99.99th=[41157] 00:10:20.590 bw ( KiB/s): min= 96, max= 112, per=2.33%, avg=100.80, stdev= 7.16, samples=5 00:10:20.590 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:10:20.590 lat (usec) : 500=1.45%, 750=1.45% 00:10:20.590 lat (msec) : 50=95.65% 00:10:20.590 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:10:20.590 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:20.590 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.590 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.590 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:20.590 00:10:20.590 Run status group 0 (all jobs): 00:10:20.590 READ: bw=4289KiB/s (4392kB/s), 96.6KiB/s-4289KiB/s (98.9kB/s-4391kB/s), io=14.0MiB (14.7MB), run=2728-3353msec 00:10:20.590 00:10:20.590 Disk stats (read/write): 00:10:20.590 nvme0n1: ios=3373/0, merge=0/0, ticks=2999/0, in_queue=2999, util=94.39% 00:10:20.590 nvme0n2: ios=82/0, merge=0/0, ticks=3338/0, in_queue=3338, 
util=95.95% 00:10:20.590 nvme0n3: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.07% 00:10:20.590 nvme0n4: ios=65/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.44% 00:10:20.850 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.850 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:20.850 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:20.850 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:21.108 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.108 19:17:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:21.367 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:21.367 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1991637 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:21.625 nvmf hotplug test: fio failed as expected 00:10:21.625 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.884 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.884 rmmod nvme_tcp 00:10:21.884 rmmod nvme_fabrics 00:10:21.885 rmmod nvme_keyring 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1988925 ']' 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1988925 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1988925 ']' 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1988925 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1988925 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1988925' 00:10:22.143 killing process with pid 1988925 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1988925 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1988925 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:22.143 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-save 
00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.144 19:17:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.684 19:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.684 00:10:24.684 real 0m26.981s 00:10:24.684 user 1m47.578s 00:10:24.684 sys 0m8.009s 00:10:24.684 19:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.684 19:17:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.684 ************************************ 00:10:24.684 END TEST nvmf_fio_target 00:10:24.684 ************************************ 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.684 ************************************ 00:10:24.684 START TEST nvmf_bdevio 00:10:24.684 ************************************ 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:24.684 * Looking for test storage... 
00:10:24.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:24.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.684 --rc genhtml_branch_coverage=1 00:10:24.684 --rc genhtml_function_coverage=1 00:10:24.684 --rc genhtml_legend=1 00:10:24.684 --rc geninfo_all_blocks=1 00:10:24.684 --rc geninfo_unexecuted_blocks=1 00:10:24.684 00:10:24.684 ' 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:24.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.684 --rc genhtml_branch_coverage=1 00:10:24.684 --rc genhtml_function_coverage=1 00:10:24.684 --rc genhtml_legend=1 00:10:24.684 --rc geninfo_all_blocks=1 00:10:24.684 --rc geninfo_unexecuted_blocks=1 00:10:24.684 00:10:24.684 ' 00:10:24.684 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:24.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.684 --rc genhtml_branch_coverage=1 00:10:24.684 --rc genhtml_function_coverage=1 00:10:24.684 --rc genhtml_legend=1 00:10:24.684 --rc geninfo_all_blocks=1 00:10:24.685 --rc geninfo_unexecuted_blocks=1 00:10:24.685 00:10:24.685 ' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:24.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.685 --rc genhtml_branch_coverage=1 00:10:24.685 --rc genhtml_function_coverage=1 00:10:24.685 --rc genhtml_legend=1 00:10:24.685 --rc geninfo_all_blocks=1 00:10:24.685 --rc geninfo_unexecuted_blocks=1 00:10:24.685 00:10:24.685 ' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.685 19:17:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.256 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:31.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:31.257 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:31.257 19:17:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:31.257 Found net devices under 0000:86:00.0: cvl_0_0 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:31.257 Found net devices under 0000:86:00.1: cvl_0_1 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:31.257 
19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:31.257 19:17:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:31.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:10:31.257 00:10:31.257 --- 10.0.0.2 ping statistics --- 00:10:31.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.257 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:31.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:10:31.257 00:10:31.257 --- 10.0.0.1 ping statistics --- 00:10:31.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.257 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1996253 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1996253 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1996253 ']' 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.257 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.257 [2024-10-17 19:17:54.327780] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
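The ping checks a few lines up close out the interface prep that the traced nvmf_tcp_init steps performed: the target-side port (cvl_0_0) is isolated in its own network namespace at 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace at 10.0.0.1, and TCP port 4420 is opened with an iptables rule tagged SPDK_NVMF so the teardown can strip it later. A condensed sketch of those steps, reusing the interface and namespace names from this particular run:

    # Interface/namespace names (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) are specific to this run.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tag the rule so 'iptables-save | grep -v SPDK_NVMF | iptables-restore' can remove it cleanly.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1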
00:10:31.257 [2024-10-17 19:17:54.327822] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.257 [2024-10-17 19:17:54.405591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.257 [2024-10-17 19:17:54.447395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.257 [2024-10-17 19:17:54.447429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.257 [2024-10-17 19:17:54.447436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.258 [2024-10-17 19:17:54.447442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.258 [2024-10-17 19:17:54.447447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.258 [2024-10-17 19:17:54.449134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:31.258 [2024-10-17 19:17:54.449262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:31.258 [2024-10-17 19:17:54.449381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.258 [2024-10-17 19:17:54.449382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.258 [2024-10-17 19:17:54.585008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.258 Malloc0 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.258 19:17:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:31.258 [2024-10-17 19:17:54.650832] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:31.258 { 00:10:31.258 "params": { 00:10:31.258 "name": "Nvme$subsystem", 00:10:31.258 "trtype": "$TEST_TRANSPORT", 00:10:31.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:31.258 "adrfam": "ipv4", 00:10:31.258 "trsvcid": "$NVMF_PORT", 00:10:31.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:31.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:31.258 "hdgst": ${hdgst:-false}, 00:10:31.258 "ddgst": ${ddgst:-false} 00:10:31.258 }, 00:10:31.258 "method": "bdev_nvme_attach_controller" 00:10:31.258 } 00:10:31.258 EOF 00:10:31.258 )") 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:10:31.258 19:17:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:31.258 "params": { 00:10:31.258 "name": "Nvme1", 00:10:31.258 "trtype": "tcp", 00:10:31.258 "traddr": "10.0.0.2", 00:10:31.258 "adrfam": "ipv4", 00:10:31.258 "trsvcid": "4420", 00:10:31.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:31.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:31.258 "hdgst": false, 00:10:31.258 "ddgst": false 00:10:31.258 }, 00:10:31.258 "method": "bdev_nvme_attach_controller" 00:10:31.258 }' 00:10:31.258 [2024-10-17 19:17:54.702219] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
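The config template printed just above is what bdevio receives on /dev/fd/62: a single bdev_nvme_attach_controller entry aimed at the listener created on 10.0.0.2 port 4420. A sketch of the rendered payload; the inner "params" block is verbatim from this log, while the outer "subsystems" wrapper is an assumption based on SPDK's usual JSON config layout:

    # Hypothetical materialization of the --json payload (the harness streams it
    # over /dev/fd/62 instead of writing a file).
    cat <<'EOF' > /tmp/bdevio_nvme.json
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    # Then: .../spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json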
00:10:31.258 [2024-10-17 19:17:54.702262] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996283 ] 00:10:31.258 [2024-10-17 19:17:54.779895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:31.258 [2024-10-17 19:17:54.823776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.258 [2024-10-17 19:17:54.823881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.258 [2024-10-17 19:17:54.823882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.517 I/O targets: 00:10:31.517 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:31.517 00:10:31.517 00:10:31.517 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.517 http://cunit.sourceforge.net/ 00:10:31.517 00:10:31.517 00:10:31.517 Suite: bdevio tests on: Nvme1n1 00:10:31.517 Test: blockdev write read block ...passed 00:10:31.517 Test: blockdev write zeroes read block ...passed 00:10:31.517 Test: blockdev write zeroes read no split ...passed 00:10:31.517 Test: blockdev write zeroes read split ...passed 00:10:31.517 Test: blockdev write zeroes read split partial ...passed 00:10:31.517 Test: blockdev reset ...[2024-10-17 19:17:55.295614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:31.517 [2024-10-17 19:17:55.295677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fe3c0 (9): Bad file descriptor 00:10:31.776 [2024-10-17 19:17:55.349038] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:31.776 passed 00:10:31.776 Test: blockdev write read 8 blocks ...passed 00:10:31.776 Test: blockdev write read size > 128k ...passed 00:10:31.776 Test: blockdev write read invalid size ...passed 00:10:31.776 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.776 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.776 Test: blockdev write read max offset ...passed 00:10:31.776 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.776 Test: blockdev writev readv 8 blocks ...passed 00:10:31.776 Test: blockdev writev readv 30 x 1block ...passed 00:10:32.036 Test: blockdev writev readv block ...passed 00:10:32.036 Test: blockdev writev readv size > 128k ...passed 00:10:32.036 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:32.036 Test: blockdev comparev and writev ...[2024-10-17 19:17:55.602419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.602447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.602461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.602469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.602704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.602714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.602726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.602733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.602959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.602968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.602980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.602987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.603215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.603225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.603236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:32.036 [2024-10-17 19:17:55.603243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:32.036 passed 00:10:32.036 Test: blockdev nvme passthru rw ...passed 00:10:32.036 Test: blockdev nvme passthru vendor specific ...[2024-10-17 19:17:55.686951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.036 [2024-10-17 19:17:55.686966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.687069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.036 [2024-10-17 19:17:55.687078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.687185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.036 [2024-10-17 19:17:55.687195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:32.036 [2024-10-17 19:17:55.687292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:32.036 [2024-10-17 19:17:55.687302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:32.036 passed 00:10:32.036 Test: blockdev nvme admin passthru ...passed 00:10:32.036 Test: blockdev copy ...passed 00:10:32.036 00:10:32.036 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.036 suites 1 1 n/a 0 0 00:10:32.036 tests 23 23 23 0 0 00:10:32.036 asserts 152 152 152 0 n/a 00:10:32.036 00:10:32.036 Elapsed time = 1.225 seconds 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:32.295 rmmod nvme_tcp 00:10:32.295 rmmod nvme_fabrics 00:10:32.295 rmmod nvme_keyring 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
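[Editor's annotation, not part of the captured log: with the bdevio suite reporting 23/23 tests and 152/152 asserts passed in 1.225 seconds, the script tears the target down — nvmf_delete_subsystem over RPC, then nvmftestfini unloads the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above) before killing the nvmf_tgt process on the following lines. A condensed sketch of that teardown is below; the rpc.py path and the kill/wait pairing are assumptions inferred from the rpc_cmd and killprocess wrappers being traced here.]

    # Teardown sketch (paths assumed from the workspace layout in this log).
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$spdk"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync                            # flush before pulling the modules
    modprobe -v -r nvme-tcp         # dependent nvme_fabrics/nvme_keyring unload too
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # what killprocess does under the hood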
00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1996253 ']' 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1996253 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1996253 ']' 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1996253 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:32.295 19:17:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1996253 00:10:32.295 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:10:32.295 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:10:32.295 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1996253' 00:10:32.295 killing process with pid 1996253 00:10:32.295 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1996253 00:10:32.295 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1996253 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.553 19:17:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.652 00:10:34.652 real 0m10.217s 00:10:34.652 user 0m11.171s 00:10:34.652 sys 0m5.036s 00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:34.652 ************************************ 00:10:34.652 END TEST nvmf_bdevio 00:10:34.652 ************************************ 00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:34.652 00:10:34.652 real 4m37.847s 00:10:34.652 user 10m25.075s 00:10:34.652 sys 1m36.977s 
00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.652 ************************************ 00:10:34.652 END TEST nvmf_target_core 00:10:34.652 ************************************ 00:10:34.652 19:17:58 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:34.652 19:17:58 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.652 19:17:58 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.652 19:17:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:34.652 ************************************ 00:10:34.652 START TEST nvmf_target_extra 00:10:34.652 ************************************ 00:10:34.652 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:34.911 * Looking for test storage... 00:10:34.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:34.911 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:34.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.912 --rc genhtml_branch_coverage=1 00:10:34.912 --rc genhtml_function_coverage=1 00:10:34.912 --rc genhtml_legend=1 00:10:34.912 --rc geninfo_all_blocks=1 00:10:34.912 --rc geninfo_unexecuted_blocks=1 00:10:34.912 00:10:34.912 ' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:34.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.912 --rc genhtml_branch_coverage=1 00:10:34.912 --rc genhtml_function_coverage=1 00:10:34.912 --rc genhtml_legend=1 00:10:34.912 --rc geninfo_all_blocks=1 00:10:34.912 --rc geninfo_unexecuted_blocks=1 00:10:34.912 00:10:34.912 ' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:34.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.912 --rc genhtml_branch_coverage=1 00:10:34.912 --rc genhtml_function_coverage=1 00:10:34.912 --rc genhtml_legend=1 00:10:34.912 --rc geninfo_all_blocks=1 00:10:34.912 --rc geninfo_unexecuted_blocks=1 00:10:34.912 00:10:34.912 ' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:34.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.912 --rc genhtml_branch_coverage=1 00:10:34.912 --rc genhtml_function_coverage=1 00:10:34.912 --rc genhtml_legend=1 00:10:34.912 --rc geninfo_all_blocks=1 00:10:34.912 --rc geninfo_unexecuted_blocks=1 00:10:34.912 00:10:34.912 ' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:34.912 ************************************ 00:10:34.912 START TEST nvmf_example 00:10:34.912 ************************************ 00:10:34.912 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:34.912 * Looking for test storage... 
00:10:35.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.172 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.173 --rc genhtml_branch_coverage=1 00:10:35.173 --rc genhtml_function_coverage=1 00:10:35.173 --rc genhtml_legend=1 00:10:35.173 --rc geninfo_all_blocks=1 00:10:35.173 --rc geninfo_unexecuted_blocks=1 00:10:35.173 00:10:35.173 ' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.173 --rc genhtml_branch_coverage=1 00:10:35.173 --rc genhtml_function_coverage=1 00:10:35.173 --rc genhtml_legend=1 00:10:35.173 --rc geninfo_all_blocks=1 00:10:35.173 --rc geninfo_unexecuted_blocks=1 00:10:35.173 00:10:35.173 ' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.173 --rc genhtml_branch_coverage=1 00:10:35.173 --rc genhtml_function_coverage=1 00:10:35.173 --rc genhtml_legend=1 00:10:35.173 --rc geninfo_all_blocks=1 00:10:35.173 --rc geninfo_unexecuted_blocks=1 00:10:35.173 00:10:35.173 ' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:35.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.173 --rc genhtml_branch_coverage=1 00:10:35.173 --rc genhtml_function_coverage=1 00:10:35.173 --rc genhtml_legend=1 00:10:35.173 --rc geninfo_all_blocks=1 00:10:35.173 --rc geninfo_unexecuted_blocks=1 00:10:35.173 00:10:35.173 ' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:35.173 19:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:35.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:35.173 19:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.173 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:35.174 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:41.746 19:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:41.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:41.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:41.746 Found net devices under 0000:86:00.0: cvl_0_0 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:41.746 Found net devices under 0000:86:00.1: cvl_0_1 00:10:41.746 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.747 19:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:41.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:10:41.747 00:10:41.747 --- 10.0.0.2 ping statistics --- 00:10:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.747 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:10:41.747 00:10:41.747 --- 10.0.0.1 ping statistics --- 00:10:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.747 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2000315 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2000315 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2000315 ']' 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.747 19:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.747 19:18:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.006 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:42.266 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:54.477 Initializing NVMe Controllers 00:10:54.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.477 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:54.477 Initialization complete. Launching workers. 00:10:54.477 ======================================================== 00:10:54.477 Latency(us) 00:10:54.477 Device Information : IOPS MiB/s Average min max 00:10:54.477 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18226.80 71.20 3510.99 683.51 16032.90 00:10:54.477 ======================================================== 00:10:54.477 Total : 18226.80 71.20 3510.99 683.51 16032.90 00:10:54.477 00:10:54.477 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:54.477 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:54.477 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.478 rmmod nvme_tcp 00:10:54.478 rmmod nvme_fabrics 00:10:54.478 rmmod nvme_keyring 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 2000315 ']' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 2000315 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2000315 ']' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2000315 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2000315 00:10:54.478 19:18:16 
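The initiator side is spdk_nvme_perf: -q 64 keeps 64 I/Os outstanding, -o 4096 issues 4 KiB I/Os, -w randrw with -M 30 gives a random mix of 30% reads and 70% writes (per the tool's rwmixread option), -t 10 runs for ten seconds, and -r carries the transport ID of the listener just created. Unflattened, the results table above says the run sustained about 18,227 IOPS (71.20 MiB/s) with an average latency of 3,510.99 us (683.51 us min, 16,032.90 us max). The invocation by hand, for reference:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'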
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2000315' 00:10:54.478 killing process with pid 2000315 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2000315 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2000315 00:10:54.478 nvmf threads initialize successfully 00:10:54.478 bdev subsystem init successfully 00:10:54.478 created a nvmf target service 00:10:54.478 create targets's poll groups done 00:10:54.478 all subsystems of target started 00:10:54.478 nvmf target is running 00:10:54.478 all subsystems of target stopped 00:10:54.478 destroy targets's poll groups done 00:10:54.478 destroyed the nvmf target service 00:10:54.478 bdev subsystem finish successfully 00:10:54.478 nvmf threads destroy successfully 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.478 19:18:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.738 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.738 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:54.738 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.738 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.998 00:10:54.998 real 0m19.915s 00:10:54.998 user 0m46.346s 00:10:54.998 sys 0m6.048s 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.998 ************************************ 00:10:54.998 END TEST nvmf_example 00:10:54.998 ************************************ 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
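Teardown (nvmftestfini) is symmetric with setup: sync, unload the kernel initiator modules, kill the target after a sanity check that the PID's comm name is still nvmf, strip SPDK's iptables additions, and dismantle the namespace. The buffered target shutdown messages ('nvmf threads destroy successfully' and friends) only flush out after the kill. A sketch of the equivalent manual steps; reducing remove_spdk_ns to a plain netns delete is a simplification:

    sync
    modprobe -v -r nvme-tcp        # also drags out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics

    nvmfpid=2000315                # recorded when the target was started
    [ "$(ps --no-headers -o comm= "$nvmfpid")" = nvmf ] && kill "$nvmfpid"

    # Drop only SPDK's firewall rules, keep everything else intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk    # simplified stand-in for remove_spdk_ns
    ip -4 addr flush cvl_0_1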
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.998 ************************************ 00:10:54.998 START TEST nvmf_filesystem 00:10:54.998 ************************************ 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:54.998 * Looking for test storage... 00:10:54.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:54.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.998 --rc genhtml_branch_coverage=1 00:10:54.998 --rc genhtml_function_coverage=1 00:10:54.998 --rc genhtml_legend=1 00:10:54.998 --rc geninfo_all_blocks=1 00:10:54.998 --rc geninfo_unexecuted_blocks=1 00:10:54.998 00:10:54.998 ' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:54.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.998 --rc genhtml_branch_coverage=1 00:10:54.998 --rc genhtml_function_coverage=1 00:10:54.998 --rc genhtml_legend=1 00:10:54.998 --rc geninfo_all_blocks=1 00:10:54.998 --rc geninfo_unexecuted_blocks=1 00:10:54.998 00:10:54.998 ' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:54.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.998 --rc genhtml_branch_coverage=1 00:10:54.998 --rc genhtml_function_coverage=1 00:10:54.998 --rc genhtml_legend=1 00:10:54.998 --rc geninfo_all_blocks=1 00:10:54.998 --rc geninfo_unexecuted_blocks=1 00:10:54.998 00:10:54.998 ' 00:10:54.998 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:54.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.998 --rc genhtml_branch_coverage=1 00:10:54.998 --rc genhtml_function_coverage=1 00:10:54.998 --rc genhtml_legend=1 00:10:54.998 --rc geninfo_all_blocks=1 00:10:54.998 --rc geninfo_unexecuted_blocks=1 00:10:54.998 00:10:54.998 ' 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:55.261 19:18:18 
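The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x; old lcov wants the --rc lcov_branch_coverage=1 spelling that subsequently lands in LCOV_OPTS. The comparison is a component-wise numeric compare over fields split on '.', '-' and ':'. A condensed, self-contained version of the idea (not the verbatim library function; numeric fields only, which is all this trace needs):

    version_lt() {                  # version_lt 1.15 2 -> status 0 when $1 < $2
        local IFS=.-:               # same separators the trace splits on
        local -a a=($1) b=($2)
        local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            x=${a[i]:-0} y=${b[i]:-0}    # missing fields compare as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                    # equal is not less-than
    }
    version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"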
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:55.261 
19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:55.261 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:55.262 #define SPDK_CONFIG_H 00:10:55.262 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:55.262 #define SPDK_CONFIG_APPS 1 00:10:55.262 #define SPDK_CONFIG_ARCH native 00:10:55.262 #undef SPDK_CONFIG_ASAN 00:10:55.262 #undef SPDK_CONFIG_AVAHI 00:10:55.262 #undef SPDK_CONFIG_CET 00:10:55.262 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:55.262 #define SPDK_CONFIG_COVERAGE 1 00:10:55.262 #define SPDK_CONFIG_CROSS_PREFIX 00:10:55.262 #undef SPDK_CONFIG_CRYPTO 00:10:55.262 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:55.262 #undef SPDK_CONFIG_CUSTOMOCF 00:10:55.262 #undef SPDK_CONFIG_DAOS 00:10:55.262 #define SPDK_CONFIG_DAOS_DIR 00:10:55.262 #define SPDK_CONFIG_DEBUG 1 00:10:55.262 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:55.262 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:55.262 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:55.262 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:55.262 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:55.262 #undef SPDK_CONFIG_DPDK_UADK 00:10:55.262 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:55.262 #define SPDK_CONFIG_EXAMPLES 1 00:10:55.262 #undef SPDK_CONFIG_FC 00:10:55.262 #define SPDK_CONFIG_FC_PATH 00:10:55.262 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:55.262 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:55.262 #define SPDK_CONFIG_FSDEV 1 00:10:55.262 #undef SPDK_CONFIG_FUSE 00:10:55.262 #undef SPDK_CONFIG_FUZZER 00:10:55.262 #define SPDK_CONFIG_FUZZER_LIB 00:10:55.262 #undef SPDK_CONFIG_GOLANG 00:10:55.262 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:55.262 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:55.262 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:55.262 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:55.262 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:55.262 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:55.262 #undef SPDK_CONFIG_HAVE_LZ4 00:10:55.262 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:55.262 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:55.262 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:55.262 #define SPDK_CONFIG_IDXD 1 00:10:55.262 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:55.262 #undef SPDK_CONFIG_IPSEC_MB 00:10:55.262 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:55.262 #define SPDK_CONFIG_ISAL 1 00:10:55.262 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:55.262 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:55.262 #define SPDK_CONFIG_LIBDIR 00:10:55.262 #undef SPDK_CONFIG_LTO 00:10:55.262 #define SPDK_CONFIG_MAX_LCORES 128 00:10:55.262 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:55.262 #define SPDK_CONFIG_NVME_CUSE 1 00:10:55.262 #undef SPDK_CONFIG_OCF 00:10:55.262 #define SPDK_CONFIG_OCF_PATH 00:10:55.262 #define SPDK_CONFIG_OPENSSL_PATH 00:10:55.262 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:55.262 #define SPDK_CONFIG_PGO_DIR 00:10:55.262 #undef SPDK_CONFIG_PGO_USE 00:10:55.262 #define SPDK_CONFIG_PREFIX /usr/local 00:10:55.262 #undef SPDK_CONFIG_RAID5F 00:10:55.262 #undef SPDK_CONFIG_RBD 00:10:55.262 #define SPDK_CONFIG_RDMA 1 00:10:55.262 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:55.262 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:55.262 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:55.262 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:55.262 #define SPDK_CONFIG_SHARED 1 00:10:55.262 #undef SPDK_CONFIG_SMA 00:10:55.262 #define SPDK_CONFIG_TESTS 1 00:10:55.262 #undef SPDK_CONFIG_TSAN 
00:10:55.262 #define SPDK_CONFIG_UBLK 1 00:10:55.262 #define SPDK_CONFIG_UBSAN 1 00:10:55.262 #undef SPDK_CONFIG_UNIT_TESTS 00:10:55.262 #undef SPDK_CONFIG_URING 00:10:55.262 #define SPDK_CONFIG_URING_PATH 00:10:55.262 #undef SPDK_CONFIG_URING_ZNS 00:10:55.262 #undef SPDK_CONFIG_USDT 00:10:55.262 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:55.262 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:55.262 #define SPDK_CONFIG_VFIO_USER 1 00:10:55.262 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:55.262 #define SPDK_CONFIG_VHOST 1 00:10:55.262 #define SPDK_CONFIG_VIRTIO 1 00:10:55.262 #undef SPDK_CONFIG_VTUNE 00:10:55.262 #define SPDK_CONFIG_VTUNE_DIR 00:10:55.262 #define SPDK_CONFIG_WERROR 1 00:10:55.262 #define SPDK_CONFIG_WPDK_DIR 00:10:55.262 #undef SPDK_CONFIG_XNVME 00:10:55.262 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
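The backslash-riddled glob that closes the config.h dump above is applications.sh asking one question before it honors SPDK_AUTOTEST_DEBUG_APPS: does the generated header define SPDK_CONFIG_DEBUG, i.e. is this a debug build. Unescaped, the test is a substring match over the slurped file; a fixed-string grep is the readable equivalent:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    config_h="$SPDK_ROOT/include/spdk/config.h"

    # $(<file) slurps the header; [[ ... == *pattern* ]] glob-matches it,
    # which is exactly what the escaped test in the trace does.
    if [[ -e "$config_h" && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build"
    fi
    grep -qF '#define SPDK_CONFIG_DEBUG' "$config_h" && echo "debug build (grep)"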
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:55.262 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:55.262 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
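Just above, pm/common assembles the list of resource monitors for the run: an associative array records which collectors need sudo, the baseline set is cpu-load plus vmstat, and the bare-metal checks (Linux, a chassis string that is not QEMU, no /.dockerenv) append the temperature and BMC power collectors. The shape of that logic, condensed, with the hardware chassis probe stubbed out as a plain variable:

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1        # BMC power readings need root
        [collect-cpu-load]=0
        [collect-cpu-temp]=0
        [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    chassis="bare-metal"          # stand-in for the real script's hardware probe
    if [[ $(uname -s) == Linux && $chassis != QEMU && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi
    for m in "${MONITOR_RESOURCES[@]}"; do
        printf 'monitor: %s (sudo=%s)\n' "$m" "${MONITOR_RESOURCES_SUDO[$m]}"
    done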
00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:55.263 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:55.263 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
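Every ': 0' (or ': 1', ': tcp', ': e810') followed by an 'export VAR' in this long run is the xtrace of one bash default-assignment idiom: ':' is the no-op builtin, and expanding ${VAR:=default} as its argument assigns the default only when VAR is unset. That is how the values injected by autorun-spdk.conf (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_TEST_NVMF_NICS=e810, ...) survive while every other knob falls back to 0. In miniature:

    SPDK_TEST_NVMF=1                 # pretend autorun-spdk.conf set this one

    : ${SPDK_TEST_NVMF:=0}           # already set: ':' just sees '1', nothing assigned
    : ${SPDK_TEST_FUZZER:=0}         # unset: ':=' assigns the default 0
    export SPDK_TEST_NVMF SPDK_TEST_FUZZER

    echo "NVMF=$SPDK_TEST_NVMF FUZZER=$SPDK_TEST_FUZZER"   # -> NVMF=1 FUZZER=0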
common/autotest_common.sh@169 -- # : 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
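Note how LD_LIBRARY_PATH and PYTHONPATH above carry the same few directories four or five times over: each nested run_test re-sources autotest_common.sh, which prepends unconditionally. That is harmless, since lookup stops at the first match, but it is why these lines balloon. A tiny dedup helper of the kind one could bolt on (illustrative only, not part of the suite):

    dedup_path() {                   # dedup_path "$LD_LIBRARY_PATH" -> deduped copy
        local IFS=: p out=
        local -A seen=()
        for p in $1; do              # unquoted on purpose: IFS=: does the splitting
            if [[ -n $p && -z ${seen[$p]} ]]; then
                seen[$p]=1
                out+=${out:+:}$p
            fi
        done
        printf '%s\n' "$out"
    }
    LD_LIBRARY_PATH=$(dedup_path "$LD_LIBRARY_PATH")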
00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
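The block above is autotest_common.sh exporting the sanitizer runtime options that govern the rest of this run. For reference, a standalone snippet that reproduces the same end state (option values copied verbatim from the trace; writing the LSAN suppression with echo is an illustrative shorthand, not necessarily how the script populates the file):

  #!/usr/bin/env bash
  # Abort on the first ASan/UBSan error so CI fails fast; keep coredumps for post-mortem.
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  # LeakSanitizer suppression list: a known leak in libfuse3 is whitelisted.
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file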
00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j96 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2003039 ]] 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2003039 00:10:55.264 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
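The lines that follow trace set_test_storage(), which picks a directory with at least the requested free space (2 GiB plus a 64 MiB margin, hence requested_size=2214592512) before the filesystem tests run. A minimal sketch of that logic (illustrative, not SPDK's verbatim implementation; $testdir is assumed to be the test's own directory):

  set_test_storage_sketch() {
    local requested_size=$1 target_dir avail fallback
    fallback=$(mktemp -udt spdk.XXXXXX)  # e.g. /tmp/spdk.oxztaG; -u means mktemp creates nothing
    for target_dir in "$testdir" "$fallback/tests/${testdir##*/}" "$fallback"; do
      mkdir -p "$target_dir"
      # df -P reports 1K blocks; field 4 of the data row is the available space.
      avail=$(( $(df -P "$target_dir" | awk 'NR==2 {print $4}') * 1024 ))
      if (( avail >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        printf '* Found test storage at %s\n' "$target_dir"
        return 0
      fi
    done
    return 1
  }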
00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.oxztaG 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.oxztaG/tests/target /tmp/spdk.oxztaG 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=606707712 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:55.265 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=4677722112 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=189152501760 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=195963949056 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6811447296 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97971941376 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981972480 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=39169748992 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=39192793088 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23044096 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=97981186048 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=97981976576 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=790528 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:55.265 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=19596382208 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=19596394496 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:55.265 * Looking for test storage... 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=189152501760 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9026039808 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:55.265 19:18:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:55.265 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:55.266 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.266 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.266 19:18:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.266 --rc genhtml_branch_coverage=1 00:10:55.266 --rc genhtml_function_coverage=1 00:10:55.266 --rc genhtml_legend=1 00:10:55.266 --rc geninfo_all_blocks=1 00:10:55.266 --rc geninfo_unexecuted_blocks=1 00:10:55.266 00:10:55.266 ' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.266 --rc genhtml_branch_coverage=1 00:10:55.266 --rc genhtml_function_coverage=1 00:10:55.266 --rc genhtml_legend=1 00:10:55.266 --rc geninfo_all_blocks=1 00:10:55.266 --rc geninfo_unexecuted_blocks=1 00:10:55.266 00:10:55.266 ' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.266 --rc genhtml_branch_coverage=1 00:10:55.266 --rc genhtml_function_coverage=1 00:10:55.266 --rc genhtml_legend=1 00:10:55.266 --rc geninfo_all_blocks=1 00:10:55.266 --rc geninfo_unexecuted_blocks=1 00:10:55.266 00:10:55.266 ' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:55.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.266 --rc genhtml_branch_coverage=1 00:10:55.266 --rc genhtml_function_coverage=1 00:10:55.266 --rc genhtml_legend=1 00:10:55.266 --rc geninfo_all_blocks=1 00:10:55.266 --rc geninfo_unexecuted_blocks=1 00:10:55.266 00:10:55.266 ' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.266 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.526 19:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:02.094 
19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:02.094 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:02.094 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:02.094 Found net devices under 0000:86:00.0: cvl_0_0 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.094 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:02.095 Found net devices under 
0000:86:00.1: cvl_0_1 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:02.095 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:02.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.474 ms 00:11:02.095 00:11:02.095 --- 10.0.0.2 ping statistics --- 00:11:02.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.095 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:02.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:11:02.095 00:11:02.095 --- 10.0.0.1 ping statistics --- 00:11:02.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.095 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:02.095 ************************************ 00:11:02.095 START TEST nvmf_filesystem_no_in_capsule 00:11:02.095 ************************************ 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
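nvmftestinit has now finished wiring the physical E810 port pair for loopback-style NVMe/TCP testing. Condensed from the trace above (interface names and addresses are whatever this CI node assigned; all commands run as root):

  # Move one port (cvl_0_0) into a private namespace for the target; keep the
  # peer port (cvl_0_1) on the host for the initiator; verify both directions.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host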
00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2006278 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2006278 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2006278 ']' 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.095 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.095 [2024-10-17 19:18:25.196824] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:11:02.095 [2024-10-17 19:18:25.196867] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.095 [2024-10-17 19:18:25.277530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.095 [2024-10-17 19:18:25.319676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.095 [2024-10-17 19:18:25.319711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.095 [2024-10-17 19:18:25.319718] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.095 [2024-10-17 19:18:25.319728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.095 [2024-10-17 19:18:25.319733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
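At this point the target application is up: nvmf_tgt was launched inside the namespace with shm id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and a 4-core mask (-m 0xF), and waitforlisten blocked until PID 2006278 answered on /var/tmp/spdk.sock. A sketch of that launch-and-wait pattern (the polling loop is illustrative; the real helper is waitforlisten() in autotest_common.sh):

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the default RPC socket until the app services RPCs, then proceed.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
  done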
00:11:02.095 [2024-10-17 19:18:25.321315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.095 [2024-10-17 19:18:25.321333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.095 [2024-10-17 19:18:25.321420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.095 [2024-10-17 19:18:25.321421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.355 [2024-10-17 19:18:26.080347] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.355 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.614 Malloc1 00:11:02.614 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.614 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:02.614 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.614 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.615 19:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 [2024-10-17 19:18:26.231916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:02.615 { 00:11:02.615 "name": "Malloc1", 00:11:02.615 "aliases": [ 00:11:02.615 "8cab7554-913a-43e2-a9f4-aaab340c92c0" 00:11:02.615 ], 00:11:02.615 "product_name": "Malloc disk", 00:11:02.615 "block_size": 512, 00:11:02.615 "num_blocks": 1048576, 00:11:02.615 "uuid": "8cab7554-913a-43e2-a9f4-aaab340c92c0", 00:11:02.615 "assigned_rate_limits": { 00:11:02.615 "rw_ios_per_sec": 0, 00:11:02.615 "rw_mbytes_per_sec": 0, 00:11:02.615 "r_mbytes_per_sec": 0, 00:11:02.615 "w_mbytes_per_sec": 0 00:11:02.615 }, 00:11:02.615 "claimed": true, 00:11:02.615 "claim_type": "exclusive_write", 00:11:02.615 "zoned": false, 00:11:02.615 "supported_io_types": { 00:11:02.615 "read": 
true, 00:11:02.615 "write": true, 00:11:02.615 "unmap": true, 00:11:02.615 "flush": true, 00:11:02.615 "reset": true, 00:11:02.615 "nvme_admin": false, 00:11:02.615 "nvme_io": false, 00:11:02.615 "nvme_io_md": false, 00:11:02.615 "write_zeroes": true, 00:11:02.615 "zcopy": true, 00:11:02.615 "get_zone_info": false, 00:11:02.615 "zone_management": false, 00:11:02.615 "zone_append": false, 00:11:02.615 "compare": false, 00:11:02.615 "compare_and_write": false, 00:11:02.615 "abort": true, 00:11:02.615 "seek_hole": false, 00:11:02.615 "seek_data": false, 00:11:02.615 "copy": true, 00:11:02.615 "nvme_iov_md": false 00:11:02.615 }, 00:11:02.615 "memory_domains": [ 00:11:02.615 { 00:11:02.615 "dma_device_id": "system", 00:11:02.615 "dma_device_type": 1 00:11:02.615 }, 00:11:02.615 { 00:11:02.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.615 "dma_device_type": 2 00:11:02.615 } 00:11:02.615 ], 00:11:02.615 "driver_specific": {} 00:11:02.615 } 00:11:02.615 ]' 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:02.615 19:18:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.991 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.991 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:03.991 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.991 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:03.991 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:05.897 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:06.465 19:18:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:06.723 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.660 ************************************ 00:11:07.660 START TEST filesystem_ext4 00:11:07.660 ************************************ 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
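Up to this point the host side has done four things: attached the subsystem with nvme connect, polled until a block device carrying the subsystem serial appeared, resolved that serial to a kernel device name, and laid down a single GPT partition. A condensed sketch of that sequence, assuming nvme-cli is installed and reusing the NQNs, serial, and lsblk regex from the trace (the 15-try, 2-second loop mirrors the waitforserial helper; this is an illustration, not the suite's actual filesystem.sh):

  #!/usr/bin/env bash
  set -euo pipefail

  serial=SPDKISFASTANDAWESOME
  hostid=00ad29c2-ccbd-e911-906e-0017a4403562

  # Attach the SPDK subsystem over NVMe/TCP (flags copied from the trace).
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid --hostid=$hostid \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

  # Poll until a namespace with the expected serial shows up.
  i=0
  while (( i++ <= 15 )); do
      lsblk -l -o NAME,SERIAL | grep -qw "$serial" && break
      sleep 2
  done

  # Map the serial to a kernel name (nvme0n1 in this run), then partition it.
  dev=$(lsblk -l -o NAME,SERIAL | grep -oP "([\w]*)(?=\s+$serial)")
  parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe && sleep 1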
00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:07.660 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:07.660 mke2fs 1.47.0 (5-Feb-2023) 00:11:07.919 Discarding device blocks: 0/522240 done 00:11:07.919 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:07.919 Filesystem UUID: 3541dc06-8b78-425a-835c-cebd7799ac23 00:11:07.919 Superblock backups stored on blocks: 00:11:07.919 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:07.919 00:11:07.919 Allocating group tables: 0/64 done 00:11:07.919 Writing inode tables: 0/64 done 00:11:08.178 Creating journal (8192 blocks): done 00:11:10.126 Writing superblocks and filesystem accounting information: 0/64 done 00:11:10.126 00:11:10.126 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:10.126 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.694 
19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2006278 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.694 00:11:16.694 real 0m8.474s 00:11:16.694 user 0m0.024s 00:11:16.694 sys 0m0.077s 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:16.694 ************************************ 00:11:16.694 END TEST filesystem_ext4 00:11:16.694 ************************************ 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.694 ************************************ 00:11:16.694 START TEST filesystem_btrfs 00:11:16.694 ************************************ 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:16.694 19:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:16.694 19:18:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:16.694 btrfs-progs v6.8.1 00:11:16.694 See https://btrfs.readthedocs.io for more information. 00:11:16.694 00:11:16.695 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:16.695 NOTE: several default settings have changed in version 5.15, please make sure 00:11:16.695 this does not affect your deployments: 00:11:16.695 - DUP for metadata (-m dup) 00:11:16.695 - enabled no-holes (-O no-holes) 00:11:16.695 - enabled free-space-tree (-R free-space-tree) 00:11:16.695 00:11:16.695 Label: (null) 00:11:16.695 UUID: 880bf52f-d31a-41c4-bf2c-06a1995de099 00:11:16.695 Node size: 16384 00:11:16.695 Sector size: 4096 (CPU page size: 4096) 00:11:16.695 Filesystem size: 510.00MiB 00:11:16.695 Block group profiles: 00:11:16.695 Data: single 8.00MiB 00:11:16.695 Metadata: DUP 32.00MiB 00:11:16.695 System: DUP 8.00MiB 00:11:16.695 SSD detected: yes 00:11:16.695 Zoned device: no 00:11:16.695 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:16.695 Checksum: crc32c 00:11:16.695 Number of devices: 1 00:11:16.695 Devices: 00:11:16.695 ID SIZE PATH 00:11:16.695 1 510.00MiB /dev/nvme0n1p1 00:11:16.695 00:11:16.695 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:16.695 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2006278 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:16.954 
19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:16.954 00:11:16.954 real 0m0.617s 00:11:16.954 user 0m0.019s 00:11:16.954 sys 0m0.123s 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:16.954 ************************************ 00:11:16.954 END TEST filesystem_btrfs 00:11:16.954 ************************************ 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:16.954 ************************************ 00:11:16.954 START TEST filesystem_xfs 00:11:16.954 ************************************ 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:16.954 19:18:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:16.954 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:16.954 = sectsz=512 attr=2, projid32bit=1 00:11:16.954 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:16.954 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:16.954 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:16.954 = sunit=0 swidth=0 blks 00:11:16.954 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:16.954 log =internal log bsize=4096 blocks=16384, version=2 00:11:16.955 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:16.955 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:18.332 Discarding blocks...Done. 00:11:18.332 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:18.332 19:18:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2006278 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:20.867 00:11:20.867 real 0m3.584s 00:11:20.867 user 0m0.025s 00:11:20.867 sys 0m0.074s 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:20.867 ************************************ 00:11:20.867 END TEST filesystem_xfs 00:11:20.867 ************************************ 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:20.867 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.868 19:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2006278 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2006278 ']' 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2006278 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2006278 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2006278' 00:11:20.868 killing process with pid 2006278 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2006278 00:11:20.868 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 2006278 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:21.126 00:11:21.126 real 0m19.699s 00:11:21.126 user 1m17.710s 00:11:21.126 sys 0m1.512s 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.126 ************************************ 00:11:21.126 END TEST nvmf_filesystem_no_in_capsule 00:11:21.126 ************************************ 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.126 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.126 ************************************ 00:11:21.126 START TEST nvmf_filesystem_in_capsule 00:11:21.126 ************************************ 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=2009733 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 2009733 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2009733 ']' 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
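The second pass that starts here differs from the run above in one setting only: in_capsule=4096, which the suite passes to nvmf_create_transport as -c 4096, so host writes of up to 4 KiB should ride inside the NVMe/TCP command capsule instead of being fetched by the target in a separate data transfer. The next few entries bring the target up again; a sketch of the same steps as standalone RPC calls, assuming the default rpc.py socket (commands and arguments mirror the trace, rpc.py being the standalone equivalent of the suite's rpc_cmd wrapper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # TCP transport with 8 KiB IO units and 4 KiB of in-capsule data.
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096
  # 512 MiB malloc bdev with 512-byte blocks (hence num_blocks 1048576).
  $rpc bdev_malloc_create 512 512 -b Malloc1
  # Subsystem advertising the serial the host-side helpers poll for.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420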
00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.385 19:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.385 [2024-10-17 19:18:44.972735] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:11:21.385 [2024-10-17 19:18:44.972781] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.385 [2024-10-17 19:18:45.052184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.385 [2024-10-17 19:18:45.094149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.385 [2024-10-17 19:18:45.094185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.385 [2024-10-17 19:18:45.094193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.385 [2024-10-17 19:18:45.094199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.385 [2024-10-17 19:18:45.094204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.385 [2024-10-17 19:18:45.095592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.385 [2024-10-17 19:18:45.095726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.385 [2024-10-17 19:18:45.095760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.385 [2024-10-17 19:18:45.095760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 [2024-10-17 19:18:45.866585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.319 19:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 Malloc1 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.319 19:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.319 [2024-10-17 19:18:46.017570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:22.319 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:22.319 19:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:22.320 { 00:11:22.320 "name": "Malloc1", 00:11:22.320 "aliases": [ 00:11:22.320 "ec249a61-fe52-4335-a866-9dc55bbbff9c" 00:11:22.320 ], 00:11:22.320 "product_name": "Malloc disk", 00:11:22.320 "block_size": 512, 00:11:22.320 "num_blocks": 1048576, 00:11:22.320 "uuid": "ec249a61-fe52-4335-a866-9dc55bbbff9c", 00:11:22.320 "assigned_rate_limits": { 00:11:22.320 "rw_ios_per_sec": 0, 00:11:22.320 "rw_mbytes_per_sec": 0, 00:11:22.320 "r_mbytes_per_sec": 0, 00:11:22.320 "w_mbytes_per_sec": 0 00:11:22.320 }, 00:11:22.320 "claimed": true, 00:11:22.320 "claim_type": "exclusive_write", 00:11:22.320 "zoned": false, 00:11:22.320 "supported_io_types": { 00:11:22.320 "read": true, 00:11:22.320 "write": true, 00:11:22.320 "unmap": true, 00:11:22.320 "flush": true, 00:11:22.320 "reset": true, 00:11:22.320 "nvme_admin": false, 00:11:22.320 "nvme_io": false, 00:11:22.320 "nvme_io_md": false, 00:11:22.320 "write_zeroes": true, 00:11:22.320 "zcopy": true, 00:11:22.320 "get_zone_info": false, 00:11:22.320 "zone_management": false, 00:11:22.320 "zone_append": false, 00:11:22.320 "compare": false, 00:11:22.320 "compare_and_write": false, 00:11:22.320 "abort": true, 00:11:22.320 "seek_hole": false, 00:11:22.320 "seek_data": false, 00:11:22.320 "copy": true, 00:11:22.320 "nvme_iov_md": false 00:11:22.320 }, 00:11:22.320 "memory_domains": [ 00:11:22.320 { 00:11:22.320 "dma_device_id": "system", 00:11:22.320 "dma_device_type": 1 00:11:22.320 }, 00:11:22.320 { 00:11:22.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.320 "dma_device_type": 2 00:11:22.320 } 00:11:22.320 ], 00:11:22.320 "driver_specific": {} 00:11:22.320 } 00:11:22.320 ]' 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:22.320 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:22.578 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:22.578 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:22.578 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:22.578 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:22.578 19:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.956 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.956 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:23.956 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.956 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:23.956 19:18:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:25.859 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:25.859 19:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:26.426 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:27.364 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:27.364 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:27.364 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:27.364 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.364 19:18:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.364 ************************************ 00:11:27.364 START TEST filesystem_in_capsule_ext4 00:11:27.364 ************************************ 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:27.364 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:27.364 mke2fs 1.47.0 (5-Feb-2023) 00:11:27.364 Discarding device blocks: 0/522240 done 00:11:27.364 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:27.364 Filesystem UUID: 79858dd5-e935-47a5-8969-7cbab97b20a9 00:11:27.364 Superblock backups stored on blocks: 00:11:27.364 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:27.364 00:11:27.364 Allocating group tables: 0/64 done 00:11:27.364 Writing inode tables: 
0/64 done 00:11:27.622 Creating journal (8192 blocks): done 00:11:27.622 Writing superblocks and filesystem accounting information: 0/64 done 00:11:27.622 00:11:27.622 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:27.622 19:18:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2009733 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.892 00:11:32.892 real 0m5.641s 00:11:32.892 user 0m0.018s 00:11:32.892 sys 0m0.080s 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.892 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:32.892 ************************************ 00:11:32.892 END TEST filesystem_in_capsule_ext4 00:11:32.892 ************************************ 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.151 
************************************ 00:11:33.151 START TEST filesystem_in_capsule_btrfs 00:11:33.151 ************************************ 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:33.151 btrfs-progs v6.8.1 00:11:33.151 See https://btrfs.readthedocs.io for more information. 00:11:33.151 00:11:33.151 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:33.151 NOTE: several default settings have changed in version 5.15, please make sure 00:11:33.151 this does not affect your deployments: 00:11:33.151 - DUP for metadata (-m dup) 00:11:33.151 - enabled no-holes (-O no-holes) 00:11:33.151 - enabled free-space-tree (-R free-space-tree) 00:11:33.151 00:11:33.151 Label: (null) 00:11:33.151 UUID: c296669d-81d9-4e2a-9a4e-63067dab3421 00:11:33.151 Node size: 16384 00:11:33.151 Sector size: 4096 (CPU page size: 4096) 00:11:33.151 Filesystem size: 510.00MiB 00:11:33.151 Block group profiles: 00:11:33.151 Data: single 8.00MiB 00:11:33.151 Metadata: DUP 32.00MiB 00:11:33.151 System: DUP 8.00MiB 00:11:33.151 SSD detected: yes 00:11:33.151 Zoned device: no 00:11:33.151 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:33.151 Checksum: crc32c 00:11:33.151 Number of devices: 1 00:11:33.151 Devices: 00:11:33.151 ID SIZE PATH 00:11:33.151 1 510.00MiB /dev/nvme0n1p1 00:11:33.151 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:33.151 19:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:34.526 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:34.526 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2009733 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:34.527 00:11:34.527 real 0m1.239s 00:11:34.527 user 0m0.034s 00:11:34.527 sys 0m0.107s 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.527 19:18:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:34.527 ************************************ 00:11:34.527 END TEST filesystem_in_capsule_btrfs 00:11:34.527 ************************************ 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.527 ************************************ 00:11:34.527 START TEST filesystem_in_capsule_xfs 00:11:34.527 ************************************ 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:34.527 19:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:34.785 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:34.785 = sectsz=512 attr=2, projid32bit=1 00:11:34.785 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:34.785 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:34.785 data = bsize=4096 blocks=130560, imaxpct=25 00:11:34.785 = sunit=0 swidth=0 blks 00:11:34.785 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:34.785 log =internal log bsize=4096 blocks=16384, version=2 00:11:34.785 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:34.785 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:35.721 Discarding blocks...Done. 
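[The xfs pass that follows repeats the same smoke test as the btrfs pass above: mount the freshly formatted partition, write and remove a file with syncs in between, then unmount. A minimal sketch of that sequence, assuming the same device and mountpoint shown in the trace:

    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    mount "$dev" "$mnt"         # attach the NVMe-oF-backed partition
    touch "$mnt/aaa" && sync    # simple write plus flush
    rm "$mnt/aaa" && sync       # delete plus flush
    umount "$mnt"               # detach cleanly before teardown

The test only asserts that each step returns 0, that the nvmf target pid is still alive (kill -0), and that lsblk still reports nvme0n1 and nvme0n1p1 afterwards.]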
00:11:35.721 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:35.721 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.256 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2009733 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.257 00:11:38.257 real 0m3.560s 00:11:38.257 user 0m0.021s 00:11:38.257 sys 0m0.078s 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.257 ************************************ 00:11:38.257 END TEST filesystem_in_capsule_xfs 00:11:38.257 ************************************ 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2009733 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2009733 ']' 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2009733 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2009733 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2009733' 00:11:38.257 killing process with pid 2009733 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2009733 00:11:38.257 19:19:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2009733 00:11:38.517 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:38.517 00:11:38.517 real 0m17.369s 00:11:38.517 user 1m8.497s 00:11:38.517 sys 0m1.443s 00:11:38.517 19:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:38.517 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.517 ************************************ 00:11:38.517 END TEST nvmf_filesystem_in_capsule 00:11:38.517 ************************************ 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:38.776 rmmod nvme_tcp 00:11:38.776 rmmod nvme_fabrics 00:11:38.776 rmmod nvme_keyring 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:38.776 19:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.706 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.977 00:11:40.977 real 0m45.876s 00:11:40.977 user 2m28.209s 00:11:40.977 sys 0m7.768s 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:40.977 
************************************ 00:11:40.977 END TEST nvmf_filesystem 00:11:40.977 ************************************ 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.977 ************************************ 00:11:40.977 START TEST nvmf_target_discovery 00:11:40.977 ************************************ 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:40.977 * Looking for test storage... 00:11:40.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.977 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.977 --rc genhtml_branch_coverage=1 00:11:40.977 --rc genhtml_function_coverage=1 00:11:40.978 --rc genhtml_legend=1 00:11:40.978 --rc geninfo_all_blocks=1 00:11:40.978 --rc geninfo_unexecuted_blocks=1 00:11:40.978 00:11:40.978 ' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.978 --rc genhtml_branch_coverage=1 00:11:40.978 --rc genhtml_function_coverage=1 00:11:40.978 --rc genhtml_legend=1 00:11:40.978 --rc geninfo_all_blocks=1 00:11:40.978 --rc geninfo_unexecuted_blocks=1 00:11:40.978 00:11:40.978 ' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:40.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.978 --rc genhtml_branch_coverage=1 00:11:40.978 --rc genhtml_function_coverage=1 00:11:40.978 --rc genhtml_legend=1 00:11:40.978 --rc geninfo_all_blocks=1 00:11:40.978 --rc geninfo_unexecuted_blocks=1 00:11:40.978 00:11:40.978 ' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.978 --rc genhtml_branch_coverage=1 00:11:40.978 --rc genhtml_function_coverage=1 00:11:40.978 --rc genhtml_legend=1 00:11:40.978 --rc geninfo_all_blocks=1 00:11:40.978 --rc geninfo_unexecuted_blocks=1 00:11:40.978 00:11:40.978 ' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.978 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:41.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:41.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.284 19:19:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.866 19:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:47.866 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:47.866 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:47.866 Found net devices under 0000:86:00.0: cvl_0_0 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.866 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
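[Both E810 ports resolve to their cvl_* interfaces through sysfs; a condensed sketch of the lookup the trace performs per PCI function, with the path and names copied from the log lines around it:

    pci=0000:86:00.1
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )   # ifaces bound to this function
    pci_net_devs=( "${pci_net_devs[@]##*/}" )            # strip the path, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

The lookup for the second port continues below.]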
00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:47.867 Found net devices under 0000:86:00.1: cvl_0_1 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.867 19:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:11:47.867 00:11:47.867 --- 10.0.0.2 ping statistics --- 00:11:47.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.867 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:47.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:11:47.867 00:11:47.867 --- 10.0.0.1 ping statistics --- 00:11:47.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.867 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=2016253 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 2016253 00:11:47.867 19:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2016253 ']' 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:47.867 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 [2024-10-17 19:19:10.824586] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:11:47.867 [2024-10-17 19:19:10.824642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.867 [2024-10-17 19:19:10.903767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.867 [2024-10-17 19:19:10.946124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.867 [2024-10-17 19:19:10.946160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.867 [2024-10-17 19:19:10.946167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.867 [2024-10-17 19:19:10.946177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.867 [2024-10-17 19:19:10.946182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
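[With the target process up, discovery.sh provisions four identical subsystems over the RPC socket. A sketch of that loop, using rpc.py as a stand-in for the script's rpc_cmd wrapper (the rpc.py path is an assumption; sizes, NQNs, serials, and addresses are taken from the trace below):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # flags copied verbatim from the trace
    for i in $(seq 1 4); do
      $rpc bdev_null_create Null$i 102400 512        # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the trace
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
          -a -s SPDK0000000000000$i                  # -a: allow any host, -s: serial number
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

nvme discover should then report six log records: the current discovery subsystem, the four NVMe subsystems, and the port-4430 referral, exactly as printed below.]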
00:11:47.867 [2024-10-17 19:19:10.947716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.867 [2024-10-17 19:19:10.947822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.867 [2024-10-17 19:19:10.947930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.867 [2024-10-17 19:19:10.947931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 [2024-10-17 19:19:11.088748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 Null1 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 19:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.867 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.867 [2024-10-17 19:19:11.134117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 Null2 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:47.868 Null3 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 Null4 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:47.868 00:11:47.868 Discovery Log Number of Records 6, Generation counter 6 00:11:47.868 =====Discovery Log Entry 0====== 00:11:47.868 trtype: tcp 00:11:47.868 adrfam: ipv4 00:11:47.868 subtype: current discovery subsystem 00:11:47.868 treq: not required 00:11:47.868 portid: 0 00:11:47.868 trsvcid: 4420 00:11:47.868 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.868 traddr: 10.0.0.2 00:11:47.868 eflags: explicit discovery connections, duplicate discovery information 00:11:47.868 sectype: none 00:11:47.868 =====Discovery Log Entry 1====== 00:11:47.868 trtype: tcp 00:11:47.868 adrfam: ipv4 00:11:47.868 subtype: nvme subsystem 00:11:47.868 treq: not required 00:11:47.868 portid: 0 00:11:47.868 trsvcid: 4420 00:11:47.868 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:47.868 traddr: 10.0.0.2 00:11:47.868 eflags: none 00:11:47.868 sectype: none 00:11:47.868 =====Discovery Log Entry 2====== 00:11:47.868 trtype: tcp 00:11:47.868 adrfam: ipv4 00:11:47.868 subtype: nvme subsystem 00:11:47.868 treq: not required 00:11:47.868 portid: 0 00:11:47.868 trsvcid: 4420 00:11:47.868 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:47.868 traddr: 10.0.0.2 00:11:47.868 eflags: none 00:11:47.868 sectype: none 00:11:47.868 =====Discovery Log Entry 3====== 00:11:47.868 trtype: tcp 00:11:47.868 adrfam: ipv4 00:11:47.868 subtype: nvme subsystem 00:11:47.868 treq: not required 00:11:47.868 portid: 0 00:11:47.868 trsvcid: 4420 00:11:47.868 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:47.868 traddr: 10.0.0.2 00:11:47.868 eflags: none 00:11:47.868 sectype: none 00:11:47.868 =====Discovery Log Entry 4====== 00:11:47.868 trtype: tcp 00:11:47.868 adrfam: ipv4 00:11:47.868 subtype: nvme subsystem 
00:11:47.868 treq: not required 00:11:47.868 portid: 0 00:11:47.868 trsvcid: 4420 00:11:47.868 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:47.868 traddr: 10.0.0.2 00:11:47.868 eflags: none 00:11:47.868 sectype: none 00:11:47.868 =====Discovery Log Entry 5====== 00:11:47.868 trtype: tcp 00:11:47.868 adrfam: ipv4 00:11:47.868 subtype: discovery subsystem referral 00:11:47.868 treq: not required 00:11:47.868 portid: 0 00:11:47.868 trsvcid: 4430 00:11:47.868 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:47.868 traddr: 10.0.0.2 00:11:47.868 eflags: none 00:11:47.868 sectype: none 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:47.868 Perform nvmf subsystem discovery via RPC 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.868 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.868 [ 00:11:47.868 { 00:11:47.868 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:47.868 "subtype": "Discovery", 00:11:47.868 "listen_addresses": [ 00:11:47.868 { 00:11:47.868 "trtype": "TCP", 00:11:47.868 "adrfam": "IPv4", 00:11:47.868 "traddr": "10.0.0.2", 00:11:47.868 "trsvcid": "4420" 00:11:47.868 } 00:11:47.868 ], 00:11:47.868 "allow_any_host": true, 00:11:47.868 "hosts": [] 00:11:47.868 }, 00:11:47.868 { 00:11:47.868 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.868 "subtype": "NVMe", 00:11:47.868 "listen_addresses": [ 00:11:47.868 { 00:11:47.868 "trtype": "TCP", 00:11:47.868 "adrfam": "IPv4", 00:11:47.868 "traddr": "10.0.0.2", 00:11:47.868 "trsvcid": "4420" 00:11:47.868 } 00:11:47.868 ], 00:11:47.868 "allow_any_host": true, 00:11:47.868 "hosts": [], 00:11:47.868 "serial_number": "SPDK00000000000001", 00:11:47.868 "model_number": "SPDK bdev Controller", 00:11:47.868 "max_namespaces": 32, 00:11:47.868 "min_cntlid": 1, 00:11:47.868 "max_cntlid": 65519, 00:11:47.868 "namespaces": [ 00:11:47.868 { 00:11:47.868 "nsid": 1, 00:11:47.868 "bdev_name": "Null1", 00:11:47.868 "name": "Null1", 00:11:47.868 "nguid": "F89BF93019FB4FF49CEE133574854B5D", 00:11:47.868 "uuid": "f89bf930-19fb-4ff4-9cee-133574854b5d" 00:11:47.868 } 00:11:47.869 ] 00:11:47.869 }, 00:11:47.869 { 00:11:47.869 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:47.869 "subtype": "NVMe", 00:11:47.869 "listen_addresses": [ 00:11:47.869 { 00:11:47.869 "trtype": "TCP", 00:11:47.869 "adrfam": "IPv4", 00:11:47.869 "traddr": "10.0.0.2", 00:11:47.869 "trsvcid": "4420" 00:11:47.869 } 00:11:47.869 ], 00:11:47.869 "allow_any_host": true, 00:11:47.869 "hosts": [], 00:11:47.869 "serial_number": "SPDK00000000000002", 00:11:47.869 "model_number": "SPDK bdev Controller", 00:11:47.869 "max_namespaces": 32, 00:11:47.869 "min_cntlid": 1, 00:11:47.869 "max_cntlid": 65519, 00:11:47.869 "namespaces": [ 00:11:47.869 { 00:11:47.869 "nsid": 1, 00:11:47.869 "bdev_name": "Null2", 00:11:47.869 "name": "Null2", 00:11:47.869 "nguid": "01FF467C252D4EA58C9C4E9B30CFB30E", 00:11:47.869 "uuid": "01ff467c-252d-4ea5-8c9c-4e9b30cfb30e" 00:11:47.869 } 00:11:47.869 ] 00:11:47.869 }, 00:11:47.869 { 00:11:47.869 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:47.869 "subtype": "NVMe", 00:11:47.869 "listen_addresses": [ 00:11:47.869 { 00:11:47.869 "trtype": "TCP", 00:11:47.869 "adrfam": "IPv4", 00:11:47.869 "traddr": "10.0.0.2", 
00:11:47.869 "trsvcid": "4420" 00:11:47.869 } 00:11:47.869 ], 00:11:47.869 "allow_any_host": true, 00:11:47.869 "hosts": [], 00:11:47.869 "serial_number": "SPDK00000000000003", 00:11:47.869 "model_number": "SPDK bdev Controller", 00:11:47.869 "max_namespaces": 32, 00:11:47.869 "min_cntlid": 1, 00:11:47.869 "max_cntlid": 65519, 00:11:47.869 "namespaces": [ 00:11:47.869 { 00:11:47.869 "nsid": 1, 00:11:47.869 "bdev_name": "Null3", 00:11:47.869 "name": "Null3", 00:11:47.869 "nguid": "68768B90F7B643F1AC635BD8BCF10126", 00:11:47.869 "uuid": "68768b90-f7b6-43f1-ac63-5bd8bcf10126" 00:11:47.869 } 00:11:47.869 ] 00:11:47.869 }, 00:11:47.869 { 00:11:47.869 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:47.869 "subtype": "NVMe", 00:11:47.869 "listen_addresses": [ 00:11:47.869 { 00:11:47.869 "trtype": "TCP", 00:11:47.869 "adrfam": "IPv4", 00:11:47.869 "traddr": "10.0.0.2", 00:11:47.869 "trsvcid": "4420" 00:11:47.869 } 00:11:47.869 ], 00:11:47.869 "allow_any_host": true, 00:11:47.869 "hosts": [], 00:11:47.869 "serial_number": "SPDK00000000000004", 00:11:47.869 "model_number": "SPDK bdev Controller", 00:11:47.869 "max_namespaces": 32, 00:11:47.869 "min_cntlid": 1, 00:11:47.869 "max_cntlid": 65519, 00:11:47.869 "namespaces": [ 00:11:47.869 { 00:11:47.869 "nsid": 1, 00:11:47.869 "bdev_name": "Null4", 00:11:47.869 "name": "Null4", 00:11:47.869 "nguid": "F5037D0BA31B4927A0D88021DBD72C90", 00:11:47.869 "uuid": "f5037d0b-a31b-4927-a0d8-8021dbd72c90" 00:11:47.869 } 00:11:47.869 ] 00:11:47.869 } 00:11:47.869 ] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:47.869 19:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.869 rmmod nvme_tcp 00:11:47.869 rmmod nvme_fabrics 00:11:47.869 rmmod nvme_keyring 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 2016253 ']' 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 2016253 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2016253 ']' 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2016253 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.869 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2016253 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2016253' 00:11:48.129 killing process with pid 2016253 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2016253 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2016253 00:11:48.129 19:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.129 19:19:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.665 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.665 00:11:50.665 real 0m9.348s 00:11:50.665 user 0m5.386s 00:11:50.665 sys 0m4.875s 00:11:50.665 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.665 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:50.665 ************************************ 00:11:50.665 END TEST nvmf_target_discovery 00:11:50.665 ************************************ 00:11:50.665 19:19:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.665 19:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.665 19:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.666 19:19:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.666 ************************************ 00:11:50.666 START TEST nvmf_referrals 00:11:50.666 ************************************ 00:11:50.666 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:50.666 * Looking for test storage... 
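The nvmf_target_discovery run that ends above reduces to a short RPC sequence. A minimal sketch of that flow, mirroring the commands traced in the log and assuming a running nvmf_tgt plus the stock scripts/rpc.py client; loop bounds and teardown order follow discovery.sh as captured, everything else is illustrative:

#!/usr/bin/env bash
# Sketch of the nvmf_target_discovery flow traced above -- not the test script itself.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in 1 2 3 4; do
  $rpc bdev_null_create Null$i 102400 512              # null bdev; size/block args as in the log
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # appears as Discovery Log Entry 5

nvme discover -t tcp -a 10.0.0.2 -s 4420               # expect 6 records: discovery + cnode1-4 + referral

for i in 1 2 3 4; do                                   # teardown, as in discovery.sh@42-44
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
  $rpc bdev_null_delete Null$i
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430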
00:11:50.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.666 --rc genhtml_branch_coverage=1 00:11:50.666 --rc genhtml_function_coverage=1 00:11:50.666 --rc genhtml_legend=1 00:11:50.666 --rc geninfo_all_blocks=1 00:11:50.666 --rc geninfo_unexecuted_blocks=1 00:11:50.666 00:11:50.666 ' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.666 --rc genhtml_branch_coverage=1 00:11:50.666 --rc genhtml_function_coverage=1 00:11:50.666 --rc genhtml_legend=1 00:11:50.666 --rc geninfo_all_blocks=1 00:11:50.666 --rc geninfo_unexecuted_blocks=1 00:11:50.666 00:11:50.666 ' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.666 --rc genhtml_branch_coverage=1 00:11:50.666 --rc genhtml_function_coverage=1 00:11:50.666 --rc genhtml_legend=1 00:11:50.666 --rc geninfo_all_blocks=1 00:11:50.666 --rc geninfo_unexecuted_blocks=1 00:11:50.666 00:11:50.666 ' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:50.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.666 --rc genhtml_branch_coverage=1 00:11:50.666 --rc genhtml_function_coverage=1 00:11:50.666 --rc genhtml_legend=1 00:11:50.666 --rc geninfo_all_blocks=1 00:11:50.666 --rc geninfo_unexecuted_blocks=1 00:11:50.666 00:11:50.666 ' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:50.666 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.667 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:57.241 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:57.241 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:57.241 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:57.241 
19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:57.241 Found net devices under 0000:86:00.0: cvl_0_0 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:57.241 Found net devices under 0000:86:00.1: cvl_0_1 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:57.241 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:57.242 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.242 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
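For reference, the nvmf_tcp_init sequence traced above amounts to the following split-namespace topology, with interface names, addresses, and the iptables rule copied from this log: the target port (cvl_0_0) moves into a private namespace at 10.0.0.2 while the initiator port (cvl_0_1) stays in the root namespace at 10.0.0.1:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the initiator
ping -c 1 10.0.0.2                                  # sanity check: root namespace -> target namespace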
00:11:57.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:11:57.242 00:11:57.242 --- 10.0.0.2 ping statistics --- 00:11:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.242 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:11:57.242 00:11:57.242 --- 10.0.0.1 ping statistics --- 00:11:57.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.242 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=2020046 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 2020046 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2020046 ']' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
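nvmfappstart then launches the target inside that namespace and blocks until its RPC socket answers; roughly equivalent to the sketch below, where the polling loop is an illustrative stand-in for the framework's waitforlisten helper (binary path and flags as captured above):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the app is up (stand-in for waitforlisten).
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done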
00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 [2024-10-17 19:19:20.205700] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:11:57.242 [2024-10-17 19:19:20.205748] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.242 [2024-10-17 19:19:20.285381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.242 [2024-10-17 19:19:20.326081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.242 [2024-10-17 19:19:20.326119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.242 [2024-10-17 19:19:20.326126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.242 [2024-10-17 19:19:20.326133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.242 [2024-10-17 19:19:20.326139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.242 [2024-10-17 19:19:20.327676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.242 [2024-10-17 19:19:20.327784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.242 [2024-10-17 19:19:20.327893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.242 [2024-10-17 19:19:20.327895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 [2024-10-17 19:19:20.476420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:57.242 [2024-10-17 19:19:20.489803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.242 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
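The block above is the core of the referrals test in one repeatable pattern: mutate the referral list through the target's RPC interface, then check that the target's own view (nvmf_discovery_get_referrals) and the host's view (the discovery log returned by nvme discover) agree. A minimal standalone sketch of that round trip, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the running target, and with the --hostnqn/--hostid flags from the trace dropped for brevity:

    # Register three referral entries on the discovery service
    # (4430 is just the port advertised inside the referral records).
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side check: the RPC returns a JSON array, so its length is the count.
    (( $(rpc_cmd nvmf_discovery_get_referrals | jq length) == 3 ))

    # Host-side check: read the discovery log on 10.0.0.2:8009 and keep every
    # record except the "current discovery subsystem" entry, i.e. the referrals.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # Removal mirrors addition; the count then drops back to 0, as traced above.
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

Sorting both address lists before comparing them, exactly as the sort calls in the trace do, keeps the check independent of the order in which the target reports its referrals.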
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:11:57.243 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.502 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:57.762 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.021 19:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.021 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.280 19:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@76 -- # jq -r .subnqn 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.538 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:58.539 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp 
']' 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.798 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.798 rmmod nvme_tcp 00:11:58.798 rmmod nvme_fabrics 00:11:58.798 rmmod nvme_keyring 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 2020046 ']' 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 2020046 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2020046 ']' 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2020046 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2020046 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2020046' 00:11:59.057 killing process with pid 2020046 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2020046 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2020046 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.057 19:19:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.057 19:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.594 00:12:01.594 real 0m10.921s 00:12:01.594 user 0m12.443s 00:12:01.594 sys 0m5.294s 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:01.594 ************************************ 00:12:01.594 END TEST nvmf_referrals 00:12:01.594 ************************************ 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.594 ************************************ 00:12:01.594 START TEST nvmf_connect_disconnect 00:12:01.594 ************************************ 00:12:01.594 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:01.594 * Looking for test storage... 00:12:01.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.594 19:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.594 --rc genhtml_branch_coverage=1 00:12:01.594 --rc genhtml_function_coverage=1 00:12:01.594 --rc genhtml_legend=1 00:12:01.594 --rc geninfo_all_blocks=1 00:12:01.594 --rc geninfo_unexecuted_blocks=1 00:12:01.594 00:12:01.594 ' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.594 --rc genhtml_branch_coverage=1 00:12:01.594 --rc genhtml_function_coverage=1 00:12:01.594 --rc genhtml_legend=1 00:12:01.594 --rc geninfo_all_blocks=1 00:12:01.594 --rc geninfo_unexecuted_blocks=1 00:12:01.594 00:12:01.594 ' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.594 --rc genhtml_branch_coverage=1 00:12:01.594 --rc genhtml_function_coverage=1 00:12:01.594 --rc genhtml_legend=1 00:12:01.594 --rc geninfo_all_blocks=1 00:12:01.594 --rc geninfo_unexecuted_blocks=1 00:12:01.594 00:12:01.594 ' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:01.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.594 --rc genhtml_branch_coverage=1 00:12:01.594 --rc genhtml_function_coverage=1 00:12:01.594 --rc genhtml_legend=1 00:12:01.594 --rc geninfo_all_blocks=1 00:12:01.594 --rc geninfo_unexecuted_blocks=1 00:12:01.594 00:12:01.594 ' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.594 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.595 19:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.595 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:08.174 
19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:08.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.174 
19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:08.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:08.174 Found net devices under 0000:86:00.0: cvl_0_0 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
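The mapping from PCI function to interface name above needs no driver-specific tooling: the harness just globs sysfs, where the kernel exposes any netdev bound to the device, which is what the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace does, and the scan is midway through repeating for the second port here. A rough standalone equivalent, with a hypothetical helper name:

    # Hypothetical helper mirroring the pci_net_devs glob from the trace.
    # Prints the net interfaces backed by one PCI function, e.g. 0000:86:00.0.
    pci_to_netdevs() {
        local pci=$1 dev
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] && basename "$dev"    # e.g. cvl_0_0
        done
    }
    pci_to_netdevs 0000:86:00.0

When the interface is up, its name is appended to net_devs, which is how cvl_0_0 and cvl_0_1 end up as the test's target and initiator ports.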
00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:08.174 Found net devices under 0000:86:00.1: cvl_0_1 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:08.174 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.175 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.175 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.175 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:08.175 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:08.175 19:19:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:08.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:12:08.175 00:12:08.175 --- 10.0.0.2 ping statistics --- 00:12:08.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.175 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:08.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:08.175 00:12:08.175 --- 10.0.0.1 ping statistics --- 00:12:08.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.175 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=2024132 00:12:08.175 19:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 2024132 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2024132 ']' 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:08.175 19:19:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.175 [2024-10-17 19:19:31.206794] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:12:08.175 [2024-10-17 19:19:31.206841] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.175 [2024-10-17 19:19:31.288531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.175 [2024-10-17 19:19:31.331789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.175 [2024-10-17 19:19:31.331826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.175 [2024-10-17 19:19:31.331833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.175 [2024-10-17 19:19:31.331839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.175 [2024-10-17 19:19:31.331843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
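Everything needed to emulate a two-node NVMe/TCP setup on a single host has now happened: one E810 port (cvl_0_0) was moved into a private network namespace and given the target address 10.0.0.2, the other (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, one ping in each direction proved the path, and nvmf_tgt was launched inside the namespace. A condensed sketch of those steps using the names and paths from the trace; the readiness wait is a simplification of the harness's waitforlisten, which polls the RPC socket:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Connectivity check in both directions before any NVMe traffic.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done

Running the target in the namespace means traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link between the two NIC ports rather than the kernel loopback, so the TCP path under test is the real hardware.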
00:12:08.175 [2024-10-17 19:19:31.333303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.175 [2024-10-17 19:19:31.333325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.175 [2024-10-17 19:19:31.333409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.175 [2024-10-17 19:19:31.333411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.434 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.435 [2024-10-17 19:19:32.080041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.435 19:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:08.435 [2024-10-17 19:19:32.152290] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:08.435 19:19:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:11.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.883 rmmod nvme_tcp 00:12:24.883 rmmod nvme_fabrics 00:12:24.883 rmmod nvme_keyring 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 2024132 ']' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 2024132 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2024132 ']' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2024132 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
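The rpc_cmd calls traced above provision the target end to end; connect_disconnect.sh then runs its loop with num_iterations=5, which produces the five "disconnected 1 controller(s)" lines. Roughly equivalent standalone commands (rpc.py stands in for the rpc_cmd wrapper; the nvme-cli flags are a sketch, the real script also passes --hostnqn/--hostid):

# Transport, backing bdev, subsystem, namespace, listener -- as in the trace above.
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512        # 64 MiB malloc bdev, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Five connect/disconnect rounds from the initiator side:
for i in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # "... disconnected 1 controller(s)"
done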
00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2024132 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2024132' 00:12:24.883 killing process with pid 2024132 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2024132 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2024132 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.883 19:19:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:27.420 00:12:27.420 real 0m25.686s 00:12:27.420 user 1m10.316s 00:12:27.420 sys 0m5.907s 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.420 ************************************ 00:12:27.420 END TEST nvmf_connect_disconnect 00:12:27.420 ************************************ 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:27.420 19:19:50 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.420 ************************************ 00:12:27.420 START TEST nvmf_multitarget 00:12:27.420 ************************************ 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:27.420 * Looking for test storage... 00:12:27.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:27.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.420 --rc genhtml_branch_coverage=1 00:12:27.420 --rc genhtml_function_coverage=1 00:12:27.420 --rc genhtml_legend=1 00:12:27.420 --rc geninfo_all_blocks=1 00:12:27.420 --rc geninfo_unexecuted_blocks=1 00:12:27.420 00:12:27.420 ' 00:12:27.420 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.421 --rc genhtml_branch_coverage=1 00:12:27.421 --rc genhtml_function_coverage=1 00:12:27.421 --rc genhtml_legend=1 00:12:27.421 --rc geninfo_all_blocks=1 00:12:27.421 --rc geninfo_unexecuted_blocks=1 00:12:27.421 00:12:27.421 ' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.421 19:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:27.421 19:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.421 19:19:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
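gather_supported_nvmf_pci_devs, whose trace begins here, builds lookup tables of known NIC device IDs (Intel E810/X722, Mellanox ConnectX) and matches them against a cached PCI bus scan. A rough lspci equivalent for just the E810 entries, with device IDs 0x1592/0x159b copied from the arrays above and net-device resolution via sysfs as the script does further down:

# List E810 ports and the kernel net devices behind them.
for id in 1592 159b; do
    for pci in $(lspci -D -d 8086:"$id" | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null   # e.g. cvl_0_0, cvl_0_1
    done
done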
00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.992 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.992 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.992 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.992 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.993 Found net devices under 0000:86:00.1: cvl_0_1 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:33.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:12:33.993 00:12:33.993 --- 10.0.0.2 ping statistics --- 00:12:33.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.993 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:12:33.993 00:12:33.993 --- 10.0.0.1 ping statistics --- 00:12:33.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.993 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=2030526 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 2030526 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2030526 ']' 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:33.993 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:33.993 [2024-10-17 19:19:57.005414] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
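The nvmf_tcp_init sequence traced just above can be replayed by hand: move one port of the NIC into a fresh namespace, address both ends, open the NVMe/TCP port, and ping in both directions. Condensed, with the interface and namespace names from this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tagged rule so teardown can strip it later with grep -v SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator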
00:12:33.993 [2024-10-17 19:19:57.005458] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.993 [2024-10-17 19:19:57.084442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.993 [2024-10-17 19:19:57.126312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.993 [2024-10-17 19:19:57.126346] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.993 [2024-10-17 19:19:57.126354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.993 [2024-10-17 19:19:57.126360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.993 [2024-10-17 19:19:57.126365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.993 [2024-10-17 19:19:57.127926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.993 [2024-10-17 19:19:57.128037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.993 [2024-10-17 19:19:57.128140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.993 [2024-10-17 19:19:57.128142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:34.252 19:19:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:34.252 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:34.252 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:34.511 "nvmf_tgt_1" 00:12:34.511 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:34.511 "nvmf_tgt_2" 00:12:34.511 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
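The multitarget assertions that start here follow one pattern: dump the target list as JSON, count it with jq, mutate, re-count. The whole test compressed into a sketch (the -s 32 argument is presumably a max-subsystems cap; treat that reading as an assumption):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # only the default target at start
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]  # back to the default target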
00:12:34.511 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:34.769 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:34.770 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:34.770 true 00:12:34.770 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:34.770 true 00:12:34.770 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:34.770 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.028 rmmod nvme_tcp 00:12:35.028 rmmod nvme_fabrics 00:12:35.028 rmmod nvme_keyring 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 2030526 ']' 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 2030526 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2030526 ']' 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2030526 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:35.028 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.029 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2030526 00:12:35.029 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.029 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.029 19:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2030526' 00:12:35.029 killing process with pid 2030526 00:12:35.029 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2030526 00:12:35.029 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2030526 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.287 19:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.822 00:12:37.822 real 0m10.283s 00:12:37.822 user 0m9.951s 00:12:37.822 sys 0m4.915s 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.822 ************************************ 00:12:37.822 END TEST nvmf_multitarget 00:12:37.822 ************************************ 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.822 ************************************ 00:12:37.822 START TEST nvmf_rpc 00:12:37.822 ************************************ 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:37.822 * Looking for test storage... 
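Teardown (nvmftestfini, traced above before the next test begins) mirrors setup: unload the initiator modules, kill the target by pid, strip only the tagged iptables rules, and drop the namespace. A sketch, with the namespace-delete line an assumption about what _remove_spdk_ns boils down to:

modprobe -r nvme-tcp nvme-fabrics nvme-keyring     # the real script retries up to 20x
kill "$nvmfpid" && wait "$nvmfpid"                 # pid 2030526 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # keep everything but test rules
ip netns delete cvl_0_0_ns_spdk                    # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1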
00:12:37.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:37.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.822 --rc genhtml_branch_coverage=1 00:12:37.822 --rc genhtml_function_coverage=1 00:12:37.822 --rc genhtml_legend=1 00:12:37.822 --rc geninfo_all_blocks=1 00:12:37.822 --rc geninfo_unexecuted_blocks=1 00:12:37.822 00:12:37.822 ' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:37.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.822 --rc genhtml_branch_coverage=1 00:12:37.822 --rc genhtml_function_coverage=1 00:12:37.822 --rc genhtml_legend=1 00:12:37.822 --rc geninfo_all_blocks=1 00:12:37.822 --rc geninfo_unexecuted_blocks=1 00:12:37.822 00:12:37.822 ' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:37.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.822 --rc genhtml_branch_coverage=1 00:12:37.822 --rc genhtml_function_coverage=1 00:12:37.822 --rc genhtml_legend=1 00:12:37.822 --rc geninfo_all_blocks=1 00:12:37.822 --rc geninfo_unexecuted_blocks=1 00:12:37.822 00:12:37.822 ' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:37.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.822 --rc genhtml_branch_coverage=1 00:12:37.822 --rc genhtml_function_coverage=1 00:12:37.822 --rc genhtml_legend=1 00:12:37.822 --rc geninfo_all_blocks=1 00:12:37.822 --rc geninfo_unexecuted_blocks=1 00:12:37.822 00:12:37.822 ' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
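The scripts/common.sh trace above (the same one ran before the multitarget test) is cmp_versions deciding whether the installed lcov predates 2.x, which controls which --rc option spellings get exported. The comparison is a field-by-field numeric walk over dot-separated versions; a stripped-down rendering of the same logic:

# Return success when $1 < $2 (numeric compare of dot-separated fields).
version_lt() {
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is pre-2.0"   # matches the 'lt 1.15 2' call above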
00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.822 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:37.823 19:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:37.823 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.394 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:44.395 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:44.395 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:44.395 Found net devices under 0000:86:00.0: cvl_0_0 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:44.395 Found net devices under 0000:86:00.1: cvl_0_1 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:44.395 19:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:44.395 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:44.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:12:44.395 00:12:44.395 --- 10.0.0.2 ping statistics --- 00:12:44.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.395 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:12:44.395 00:12:44.395 --- 10.0.0.1 ping statistics --- 00:12:44.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.395 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:44.395 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=2034321 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 2034321 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2034321 ']' 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:44.396 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.396 [2024-10-17 19:20:07.347503] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
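The block above is nvmf_tcp_init building the test topology: one port of the E810 pair is moved into a private network namespace to act as the target, its peer stays in the root namespace as the initiator, the NVMe/TCP port is opened in the firewall, and a single ping in each direction proves reachability. Condensed to the commands actually traced (interface names and addresses are this run's values):

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

With both pings answering in well under a millisecond, nvmfappstart prepends NVMF_TARGET_NS_CMD to NVMF_APP and launches nvmf_tgt inside the namespace (the `ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF` traced above); the startup banner that follows is that process.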
00:12:44.396 [2024-10-17 19:20:07.347551] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.396 [2024-10-17 19:20:07.428323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.396 [2024-10-17 19:20:07.470197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.396 [2024-10-17 19:20:07.470234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.396 [2024-10-17 19:20:07.470241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.396 [2024-10-17 19:20:07.470247] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.396 [2024-10-17 19:20:07.470253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.396 [2024-10-17 19:20:07.471810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.396 [2024-10-17 19:20:07.471918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.396 [2024-10-17 19:20:07.472028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.396 [2024-10-17 19:20:07.472029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:44.655 "tick_rate": 2100000000, 00:12:44.655 "poll_groups": [ 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_000", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [] 00:12:44.655 }, 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_001", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [] 00:12:44.655 }, 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_002", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 
"current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [] 00:12:44.655 }, 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_003", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [] 00:12:44.655 } 00:12:44.655 ] 00:12:44.655 }' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.655 [2024-10-17 19:20:08.334210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:44.655 "tick_rate": 2100000000, 00:12:44.655 "poll_groups": [ 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_000", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [ 00:12:44.655 { 00:12:44.655 "trtype": "TCP" 00:12:44.655 } 00:12:44.655 ] 00:12:44.655 }, 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_001", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [ 00:12:44.655 { 00:12:44.655 "trtype": "TCP" 00:12:44.655 } 00:12:44.655 ] 00:12:44.655 }, 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_002", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [ 00:12:44.655 { 00:12:44.655 "trtype": "TCP" 
00:12:44.655 } 00:12:44.655 ] 00:12:44.655 }, 00:12:44.655 { 00:12:44.655 "name": "nvmf_tgt_poll_group_003", 00:12:44.655 "admin_qpairs": 0, 00:12:44.655 "io_qpairs": 0, 00:12:44.655 "current_admin_qpairs": 0, 00:12:44.655 "current_io_qpairs": 0, 00:12:44.655 "pending_bdev_io": 0, 00:12:44.655 "completed_nvme_io": 0, 00:12:44.655 "transports": [ 00:12:44.655 { 00:12:44.655 "trtype": "TCP" 00:12:44.655 } 00:12:44.655 ] 00:12:44.655 } 00:12:44.655 ] 00:12:44.655 }' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:44.655 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:44.656 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:44.656 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.915 Malloc1 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
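The jcount/jsum helpers traced above reduce the nvmf_get_stats JSON with jq piped into wc or awk: before nvmf_create_transport each poll group's "transports" array is empty, and afterwards each of the four poll groups (one per core of the 0xF mask) carries a TCP transport with every qpair counter still at zero. The two reductions, assuming $stats holds the JSON captured above:

    # jcount: number of poll groups (expected to equal the 4 reactor cores)
    jq '.poll_groups[].name' <<< "$stats" | wc -l
    # jsum: total qpairs across poll groups (0 before any host connects)
    jq '.poll_groups[].admin_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'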
common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.915 [2024-10-17 19:20:08.503089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:44.915 [2024-10-17 19:20:08.531825] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:44.915 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.915 could not add new controller: failed to write to nvme-fabrics device 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:44.915 19:20:08 
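The Input/output error above is the expected branch of the test: the subsystem was created with allow_any_host disabled (-d), so nvmf_qpair_access_allowed rejects a connect from a hostnqn that has no ACL entry, and nvme-cli surfaces that as a failed write to /dev/nvme-fabrics. The recovery traced next is to whitelist the host and retry; reduced to the two essential steps (NQN values are this run's, and rpc_cmd is the harness wrapper around scripts/rpc.py):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    # grant this specific host access to the subsystem ...
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    # ... after which the previously rejected connect is accepted
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$hostnqn"

The mirror-image case follows in the trace: the host entry is removed again, the connect fails the same way, and only re-enabling allow_any_host (-e) lets an arbitrary hostnqn in.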
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.915 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.292 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.292 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.292 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.292 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.292 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.194 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.452 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:48.453 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.453 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:48.453 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:48.453 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.453 [2024-10-17 19:20:12.005512] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:48.453 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:48.453 could not add new controller: failed to write to nvme-fabrics device 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.453 
19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.453 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.388 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.389 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.389 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.389 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.389 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.456 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.716 
19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.716 [2024-10-17 19:20:15.425972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.716 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.095 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.095 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.095 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.095 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.095 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.000 [2024-10-17 19:20:18.770083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
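The disconnect and teardown above close the first of the five rpc.sh loop iterations ($loops=5); each pass provisions, exercises, and removes the same subsystem, and the remaining iterations traced below repeat it verbatim. In outline, using only commands and values that appear in the trace:

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
        waitforserial SPDKISFASTANDAWESOME            # block device shows the serial
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME # block device goes away again
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done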
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.000 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.259 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.259 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.196 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.196 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.196 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.196 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.196 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.730 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.730 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.730 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.730 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.730 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.730 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.731 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 [2024-10-17 19:20:22.275038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.731 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.665 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.665 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:59.665 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.665 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:59.665 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:02.198 
19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:02.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.198 [2024-10-17 19:20:25.589093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.198 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.199 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:02.199 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.199 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.199 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.199 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.136 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.136 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.136 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.136 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.136 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:05.039 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.298 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 [2024-10-17 19:20:29.007303] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.298 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.678 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.678 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:06.678 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.678 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:06.678 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:08.587 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:08.587 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:08.587 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:08.588 
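Each of the iterations above is one pass of the first rpc.sh loop: build a subsystem over the RPC socket, connect a host to it over NVMe/TCP, then tear everything down. Condensed from the trace (rpc_cmd is the suite's RPC wrapper; $loops and the hostnqn/hostid arguments are exactly as logged):

  for i in $(seq 1 $loops); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
          --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 \
          -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      waitforserial_disconnect SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The seq 1 5 that begins here kicks off the second loop (rpc.sh@99-107), which repeats the same create / add_listener / add_ns / allow_any_host / remove_ns / delete sequence five times without connecting a host, exercising the RPCs alone.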
19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 [2024-10-17 19:20:32.277702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 [2024-10-17 19:20:32.325742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.588 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.588 
19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 [2024-10-17 19:20:32.373875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 [2024-10-17 19:20:32.422028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 [2024-10-17 19:20:32.470205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.848 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:08.849 "tick_rate": 2100000000, 00:13:08.849 "poll_groups": [ 00:13:08.849 { 00:13:08.849 "name": "nvmf_tgt_poll_group_000", 00:13:08.849 "admin_qpairs": 2, 00:13:08.849 "io_qpairs": 168, 00:13:08.849 "current_admin_qpairs": 0, 00:13:08.849 "current_io_qpairs": 0, 00:13:08.849 "pending_bdev_io": 0, 00:13:08.849 "completed_nvme_io": 240, 00:13:08.849 "transports": [ 00:13:08.849 { 00:13:08.849 "trtype": "TCP" 00:13:08.849 } 00:13:08.849 ] 00:13:08.849 }, 00:13:08.849 { 00:13:08.849 "name": "nvmf_tgt_poll_group_001", 00:13:08.849 "admin_qpairs": 2, 00:13:08.849 "io_qpairs": 168, 00:13:08.849 "current_admin_qpairs": 0, 00:13:08.849 "current_io_qpairs": 0, 00:13:08.849 "pending_bdev_io": 0, 00:13:08.849 "completed_nvme_io": 252, 00:13:08.849 "transports": [ 00:13:08.849 { 00:13:08.849 "trtype": "TCP" 00:13:08.849 } 00:13:08.849 ] 00:13:08.849 }, 00:13:08.849 { 00:13:08.849 "name": "nvmf_tgt_poll_group_002", 00:13:08.849 "admin_qpairs": 1, 00:13:08.849 "io_qpairs": 168, 00:13:08.849 "current_admin_qpairs": 0, 00:13:08.849 "current_io_qpairs": 0, 00:13:08.849 "pending_bdev_io": 0, 00:13:08.849 "completed_nvme_io": 219, 00:13:08.849 "transports": [ 00:13:08.849 { 00:13:08.849 "trtype": "TCP" 00:13:08.849 } 00:13:08.849 ] 00:13:08.849 }, 00:13:08.849 { 00:13:08.849 "name": "nvmf_tgt_poll_group_003", 00:13:08.849 "admin_qpairs": 2, 00:13:08.849 "io_qpairs": 168, 00:13:08.849 "current_admin_qpairs": 0, 00:13:08.849 "current_io_qpairs": 0, 00:13:08.849 "pending_bdev_io": 0, 00:13:08.849 "completed_nvme_io": 311, 00:13:08.849 "transports": [ 00:13:08.849 { 00:13:08.849 "trtype": "TCP" 00:13:08.849 } 00:13:08.849 ] 00:13:08.849 } 00:13:08.849 ] 00:13:08.849 }' 00:13:08.849 19:20:32 
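nvmf_get_stats returns one object per target poll group, and the jsum helper invoked next (rpc.sh@19-20, visible in the following trace) folds one field across all groups: a jq filter emits the per-group values and awk totals them. A sketch, assuming the captured $stats JSON is fed via a herestring (how $stats reaches jq is not itself visible in the trace):

  jsum() {
      local filter=$1
      # one number per poll group, then sum the column
      jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
  }

With the stats above, jsum '.poll_groups[].admin_qpairs' gives 2+2+1+2 = 7 and jsum '.poll_groups[].io_qpairs' gives 4*168 = 672, matching the (( 7 > 0 )) and (( 672 > 0 )) assertions that follow.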
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:08.849 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:08.849 rmmod nvme_tcp 00:13:09.108 rmmod nvme_fabrics 00:13:09.108 rmmod nvme_keyring 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 2034321 ']' 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 2034321 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2034321 ']' 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2034321 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2034321 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2034321' 00:13:09.108 killing process with pid 2034321 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2034321 00:13:09.108 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2034321 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.368 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.276 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:11.276 00:13:11.276 real 0m33.909s 00:13:11.276 user 1m43.201s 00:13:11.276 sys 0m6.600s 00:13:11.276 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:11.276 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:11.276 ************************************ 00:13:11.276 END TEST nvmf_rpc 00:13:11.276 ************************************ 00:13:11.276 19:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.276 19:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:11.276 19:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:11.276 19:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:11.537 ************************************ 00:13:11.537 START TEST nvmf_invalid 00:13:11.537 ************************************ 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:11.537 * Looking for test storage... 
00:13:11.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.537 --rc genhtml_branch_coverage=1 00:13:11.537 --rc genhtml_function_coverage=1 00:13:11.537 --rc genhtml_legend=1 00:13:11.537 --rc geninfo_all_blocks=1 00:13:11.537 --rc geninfo_unexecuted_blocks=1 00:13:11.537 00:13:11.537 ' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.537 --rc genhtml_branch_coverage=1 00:13:11.537 --rc genhtml_function_coverage=1 00:13:11.537 --rc genhtml_legend=1 00:13:11.537 --rc geninfo_all_blocks=1 00:13:11.537 --rc geninfo_unexecuted_blocks=1 00:13:11.537 00:13:11.537 ' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.537 --rc genhtml_branch_coverage=1 00:13:11.537 --rc genhtml_function_coverage=1 00:13:11.537 --rc genhtml_legend=1 00:13:11.537 --rc geninfo_all_blocks=1 00:13:11.537 --rc geninfo_unexecuted_blocks=1 00:13:11.537 00:13:11.537 ' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:11.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.537 --rc genhtml_branch_coverage=1 00:13:11.537 --rc genhtml_function_coverage=1 00:13:11.537 --rc genhtml_legend=1 00:13:11.537 --rc geninfo_all_blocks=1 00:13:11.537 --rc geninfo_unexecuted_blocks=1 00:13:11.537 00:13:11.537 ' 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:11.537 19:20:35 
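The scripts/common.sh trace above is a version gate used to pick lcov's coverage flags: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both strings on '.', '-' and ':' and compares them numerically component by component (so 1.15 sorts after 1.2). A condensed sketch of that comparison; the real helper, as the trace shows, also walks mixed-length versions and handles '>' and '=':

  cmp_versions() {
      local op=$2 v IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          # first unequal component decides; missing components count as 0
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
      done
      [[ $op == '=' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }

Here lt 1.15 2 succeeds (1 < 2 in the first component), so the suite selects the branch/function coverage LCOV_OPTS seen in the trace.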
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.537 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:11.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:11.538 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:18.110 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.110 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:18.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:18.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:18.111 Found net devices under 0000:86:00.0: cvl_0_0 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:18.111 Found net devices under 0000:86:00.1: cvl_0_1 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
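Both e810 ports have now been located (0000:86:00.0 mapped to cvl_0_0, 0000:86:00.1 to cvl_0_1), so nvmf_tcp_init, traced next, can split them across network namespaces: the target NIC moves into cvl_0_0_ns_spdk with 10.0.0.2, the initiator NIC stays in the root namespace with 10.0.0.1, and the two pings verify reachability in both directions. Boiled down, the plumbing that follows is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

(The suite's ipts wrapper additionally tags the iptables rule with an SPDK_NVMF comment so nvmftestfini can strip it later, as the iptables-save | grep -v SPDK_NVMF cleanup earlier in the log shows.)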
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:18.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:13:18.111 00:13:18.111 --- 10.0.0.2 ping statistics --- 00:13:18.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.111 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:18.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:13:18.111 00:13:18.111 --- 10.0.0.1 ping statistics --- 00:13:18.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.111 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=2042159 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 2042159 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2042159 ']' 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:18.111 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.111 [2024-10-17 19:20:41.355907] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
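The nvmf_tcp_init sequence traced above sets up the standard two-port topology for these phy runs: the target-side port (cvl_0_0) is moved into a private network namespace with 10.0.0.2/24, the initiator-side port (cvl_0_1) stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened with a tagged iptables rule, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace (its startup banner continues below). A minimal sketch of the same setup, assuming two cabled ports named p0/p1 and a locally built nvmf_tgt -- the names and the binary path are placeholders, not taken from this run:

    NS=spdk_tgt_ns TGT_IF=p0 INI_IF=p1                  # hypothetical names; this run uses cvl_0_0/cvl_0_1
    sudo ip netns add "$NS"
    sudo ip link set "$TGT_IF" netns "$NS"              # target port now lives in the namespace
    sudo ip addr add 10.0.0.1/24 dev "$INI_IF"          # initiator side stays in the root namespace
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    sudo ip link set "$INI_IF" up
    sudo ip netns exec "$NS" ip link set "$TGT_IF" up
    sudo ip netns exec "$NS" ip link set lo up
    sudo iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    sudo ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace
    sudo ip netns exec "$NS" ./build/bin/nvmf_tgt -m 0xF &   # target runs inside the namespace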
00:13:18.112 [2024-10-17 19:20:41.355951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.112 [2024-10-17 19:20:41.435012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.112 [2024-10-17 19:20:41.477156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.112 [2024-10-17 19:20:41.477191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.112 [2024-10-17 19:20:41.477198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.112 [2024-10-17 19:20:41.477204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.112 [2024-10-17 19:20:41.477209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.112 [2024-10-17 19:20:41.478779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.112 [2024-10-17 19:20:41.478887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.112 [2024-10-17 19:20:41.478996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.112 [2024-10-17 19:20:41.478997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2515 00:13:18.680 [2024-10-17 19:20:42.409859] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:18.680 { 00:13:18.680 "nqn": "nqn.2016-06.io.spdk:cnode2515", 00:13:18.680 "tgt_name": "foobar", 00:13:18.680 "method": "nvmf_create_subsystem", 00:13:18.680 "req_id": 1 00:13:18.680 } 00:13:18.680 Got JSON-RPC error response 00:13:18.680 response: 00:13:18.680 { 00:13:18.680 "code": -32603, 00:13:18.680 "message": "Unable to find target foobar" 00:13:18.680 }' 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:18.680 { 00:13:18.680 "nqn": "nqn.2016-06.io.spdk:cnode2515", 00:13:18.680 "tgt_name": "foobar", 00:13:18.680 "method": "nvmf_create_subsystem", 00:13:18.680 "req_id": 1 00:13:18.680 } 00:13:18.680 Got JSON-RPC error response 00:13:18.680 
response: 00:13:18.680 { 00:13:18.680 "code": -32603, 00:13:18.680 "message": "Unable to find target foobar" 00:13:18.680 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:18.680 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14175 00:13:18.939 [2024-10-17 19:20:42.614619] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14175: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:18.939 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:18.939 { 00:13:18.939 "nqn": "nqn.2016-06.io.spdk:cnode14175", 00:13:18.939 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.939 "method": "nvmf_create_subsystem", 00:13:18.939 "req_id": 1 00:13:18.939 } 00:13:18.939 Got JSON-RPC error response 00:13:18.939 response: 00:13:18.939 { 00:13:18.939 "code": -32602, 00:13:18.939 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.939 }' 00:13:18.939 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:18.939 { 00:13:18.939 "nqn": "nqn.2016-06.io.spdk:cnode14175", 00:13:18.939 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:18.939 "method": "nvmf_create_subsystem", 00:13:18.939 "req_id": 1 00:13:18.939 } 00:13:18.939 Got JSON-RPC error response 00:13:18.939 response: 00:13:18.939 { 00:13:18.939 "code": -32602, 00:13:18.939 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:18.939 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:18.939 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:18.939 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19770 00:13:19.199 [2024-10-17 19:20:42.819283] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19770: invalid model number 'SPDK_Controller' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:19.199 { 00:13:19.199 "nqn": "nqn.2016-06.io.spdk:cnode19770", 00:13:19.199 "model_number": "SPDK_Controller\u001f", 00:13:19.199 "method": "nvmf_create_subsystem", 00:13:19.199 "req_id": 1 00:13:19.199 } 00:13:19.199 Got JSON-RPC error response 00:13:19.199 response: 00:13:19.199 { 00:13:19.199 "code": -32602, 00:13:19.199 "message": "Invalid MN SPDK_Controller\u001f" 00:13:19.199 }' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:19.199 { 00:13:19.199 "nqn": "nqn.2016-06.io.spdk:cnode19770", 00:13:19.199 "model_number": "SPDK_Controller\u001f", 00:13:19.199 "method": "nvmf_create_subsystem", 00:13:19.199 "req_id": 1 00:13:19.199 } 00:13:19.199 Got JSON-RPC error response 00:13:19.199 response: 00:13:19.199 { 00:13:19.199 "code": -32602, 00:13:19.199 "message": "Invalid MN SPDK_Controller\u001f" 00:13:19.199 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:19.199 19:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:19.199 
19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.199 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 
00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.200 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:13:19.459 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n#$iJOL3S7)*%&Ub;MK2S' 00:13:19.459 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'n#$iJOL3S7)*%&Ub;MK2S' nqn.2016-06.io.spdk:cnode16597 00:13:19.459 [2024-10-17 19:20:43.152409] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16597: invalid serial number 'n#$iJOL3S7)*%&Ub;MK2S' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:19.459 { 00:13:19.459 "nqn": "nqn.2016-06.io.spdk:cnode16597", 00:13:19.459 "serial_number": "n#$iJOL3S7)*%&Ub;MK2S", 00:13:19.459 "method": "nvmf_create_subsystem", 00:13:19.459 "req_id": 1 00:13:19.459 } 00:13:19.459 Got JSON-RPC error response 00:13:19.459 response: 00:13:19.459 { 00:13:19.459 "code": -32602, 00:13:19.459 "message": "Invalid SN n#$iJOL3S7)*%&Ub;MK2S" 00:13:19.459 }' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:19.459 { 00:13:19.459 "nqn": "nqn.2016-06.io.spdk:cnode16597", 00:13:19.459 "serial_number": "n#$iJOL3S7)*%&Ub;MK2S", 00:13:19.459 "method": "nvmf_create_subsystem", 00:13:19.459 "req_id": 1 00:13:19.459 } 00:13:19.459 Got JSON-RPC error response 00:13:19.459 response: 00:13:19.459 { 00:13:19.459 "code": -32602, 00:13:19.459 "message": "Invalid SN n#$iJOL3S7)*%&Ub;MK2S" 00:13:19.459 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' 
'75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:19.459 
19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:19.459 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:19.719 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:19.720 
19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 
19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 
19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 
00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.720 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 
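The long printf/echo/append run surrounding this point (it continues just below until the 41st character) is invalid.sh's gen_random_s helper expanding under xtrace: it draws codes from the printable range 32-127, renders each one via printf %x plus echo -e, appends it to string, and finally checks that the result does not begin with '-' (the [[ n == \- ]] and [[ t == \- ]] tests in the trace). A condensed sketch of that logic using bash's RANDOM; the leading-dash handling shown here is illustrative rather than the helper verbatim:

    gen_random_s() {
        local length=$1 ll string=
        local chars=({32..127})                           # printable ASCII, as in the traced array
        for ((ll = 0; ll < length; ll++)); do
            # render one random code point as a character and append it
            string+=$(echo -e "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")")
        done
        [[ ${string::1} == - ]] && string=_${string:1}    # keep the result from looking like an option flag
        echo "$string"
    }
    gen_random_s 21    # e.g. the 21-character serial number exercised earlier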
00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ t == \- ]] 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H' 00:13:19.721 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H' nqn.2016-06.io.spdk:cnode17456 00:13:19.980 [2024-10-17 19:20:43.625932] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17456: invalid model number 'te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H' 00:13:19.980 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:19.980 { 00:13:19.980 "nqn": "nqn.2016-06.io.spdk:cnode17456", 00:13:19.980 "model_number": "te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H", 00:13:19.980 "method": "nvmf_create_subsystem", 00:13:19.980 "req_id": 1 00:13:19.980 } 00:13:19.980 Got JSON-RPC error response 00:13:19.980 response: 00:13:19.980 { 00:13:19.980 "code": -32602, 00:13:19.980 "message": "Invalid MN te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H" 00:13:19.980 }' 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:19.980 { 00:13:19.980 "nqn": "nqn.2016-06.io.spdk:cnode17456", 00:13:19.980 "model_number": "te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H", 00:13:19.980 "method": "nvmf_create_subsystem", 00:13:19.980 "req_id": 1 00:13:19.980 } 00:13:19.980 Got JSON-RPC error response 00:13:19.980 response: 00:13:19.980 { 00:13:19.980 "code": -32602, 00:13:19.980 "message": "Invalid MN te<D4!5YQ]LRTbB~,9s~lA4IMax]cyte&@4xDy7>H" 00:13:19.980 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:19.980 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:20.239 [2024-10-17 19:20:43.818648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:20.239 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:20.497
[2024-10-17 19:20:44.223969] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:20.497 { 00:13:20.497 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:20.497 "listen_address": { 00:13:20.497 "trtype": "tcp", 00:13:20.497 "traddr": "", 00:13:20.497 "trsvcid": "4421" 00:13:20.497 }, 00:13:20.497 "method": "nvmf_subsystem_remove_listener", 00:13:20.497 "req_id": 1 00:13:20.497 } 00:13:20.497 Got JSON-RPC error response 00:13:20.497 response: 00:13:20.497 { 00:13:20.497 "code": -32602, 00:13:20.497 "message": "Invalid parameters" 00:13:20.497 }' 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:20.497 { 00:13:20.497 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:20.497 "listen_address": { 00:13:20.497 "trtype": "tcp", 00:13:20.497 "traddr": "", 00:13:20.497 "trsvcid": "4421" 00:13:20.497 }, 00:13:20.497 "method": "nvmf_subsystem_remove_listener", 00:13:20.497 "req_id": 1 00:13:20.497 } 00:13:20.497 Got JSON-RPC error response 00:13:20.497 response: 00:13:20.497 { 00:13:20.497 "code": -32602, 00:13:20.497 "message": "Invalid parameters" 00:13:20.497 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:20.497 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9994 -i 0 00:13:20.755 [2024-10-17 19:20:44.416541] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9994: invalid cntlid range [0-65519] 00:13:20.755 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:20.755 { 00:13:20.755 "nqn": "nqn.2016-06.io.spdk:cnode9994", 00:13:20.755 "min_cntlid": 0, 00:13:20.755 "method": "nvmf_create_subsystem", 00:13:20.755 "req_id": 1 00:13:20.755 } 00:13:20.755 Got JSON-RPC error response 00:13:20.755 response: 00:13:20.755 { 00:13:20.755 "code": -32602, 00:13:20.755 "message": "Invalid cntlid range [0-65519]" 00:13:20.755 }' 00:13:20.755 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:20.755 { 00:13:20.755 "nqn": "nqn.2016-06.io.spdk:cnode9994", 00:13:20.755 "min_cntlid": 0, 00:13:20.755 "method": "nvmf_create_subsystem", 00:13:20.755 "req_id": 1 00:13:20.755 } 00:13:20.755 Got JSON-RPC error response 00:13:20.755 response: 00:13:20.755 { 00:13:20.755 "code": -32602, 00:13:20.755 "message": "Invalid cntlid range [0-65519]" 00:13:20.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:20.755 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9376 -i 65520 00:13:21.014 [2024-10-17 19:20:44.617227] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9376: invalid cntlid range [65520-65519] 00:13:21.014 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:21.014 { 00:13:21.014 "nqn": "nqn.2016-06.io.spdk:cnode9376", 00:13:21.014 "min_cntlid": 65520, 00:13:21.014 "method": "nvmf_create_subsystem", 00:13:21.014 "req_id": 1 00:13:21.014 } 00:13:21.014 Got JSON-RPC error response 00:13:21.014 response: 00:13:21.014 { 00:13:21.014 "code": -32602, 00:13:21.014 "message": "Invalid cntlid range [65520-65519]" 00:13:21.014 }' 
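Every cntlid case in this block follows the same capture-and-match pattern: call the RPC with an out-of-range value, capture the JSON-RPC error text, then assert the expected message with the [[ ... == *...* ]] check that resumes just below. A reduced sketch of that pattern; the rpc.py path is abbreviated here, and -i/-I set min_cntlid/max_cntlid exactly as in the traced calls:

    rpc_py=./scripts/rpc.py    # abbreviated; this run invokes the Jenkins workspace copy of SPDK
    out=$($rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9994 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]]                # expect code -32602 with this message
    out=$($rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7355 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range [6-5]"* ]]          # min > max is rejected as well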
00:13:21.014 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:21.014 { 00:13:21.014 "nqn": "nqn.2016-06.io.spdk:cnode9376", 00:13:21.014 "min_cntlid": 65520, 00:13:21.014 "method": "nvmf_create_subsystem", 00:13:21.014 "req_id": 1 00:13:21.014 } 00:13:21.014 Got JSON-RPC error response 00:13:21.014 response: 00:13:21.014 { 00:13:21.014 "code": -32602, 00:13:21.014 "message": "Invalid cntlid range [65520-65519]" 00:13:21.014 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.014 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25319 -I 0 00:13:21.274 [2024-10-17 19:20:44.837954] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25319: invalid cntlid range [1-0] 00:13:21.274 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:21.274 { 00:13:21.274 "nqn": "nqn.2016-06.io.spdk:cnode25319", 00:13:21.274 "max_cntlid": 0, 00:13:21.274 "method": "nvmf_create_subsystem", 00:13:21.274 "req_id": 1 00:13:21.274 } 00:13:21.274 Got JSON-RPC error response 00:13:21.274 response: 00:13:21.274 { 00:13:21.274 "code": -32602, 00:13:21.274 "message": "Invalid cntlid range [1-0]" 00:13:21.274 }' 00:13:21.274 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:21.274 { 00:13:21.274 "nqn": "nqn.2016-06.io.spdk:cnode25319", 00:13:21.274 "max_cntlid": 0, 00:13:21.274 "method": "nvmf_create_subsystem", 00:13:21.274 "req_id": 1 00:13:21.274 } 00:13:21.274 Got JSON-RPC error response 00:13:21.274 response: 00:13:21.274 { 00:13:21.274 "code": -32602, 00:13:21.274 "message": "Invalid cntlid range [1-0]" 00:13:21.274 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.274 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21418 -I 65520 00:13:21.274 [2024-10-17 19:20:45.046658] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21418: invalid cntlid range [1-65520] 00:13:21.533 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:21.533 { 00:13:21.533 "nqn": "nqn.2016-06.io.spdk:cnode21418", 00:13:21.533 "max_cntlid": 65520, 00:13:21.533 "method": "nvmf_create_subsystem", 00:13:21.533 "req_id": 1 00:13:21.533 } 00:13:21.533 Got JSON-RPC error response 00:13:21.533 response: 00:13:21.533 { 00:13:21.533 "code": -32602, 00:13:21.533 "message": "Invalid cntlid range [1-65520]" 00:13:21.533 }' 00:13:21.533 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:21.533 { 00:13:21.533 "nqn": "nqn.2016-06.io.spdk:cnode21418", 00:13:21.533 "max_cntlid": 65520, 00:13:21.533 "method": "nvmf_create_subsystem", 00:13:21.533 "req_id": 1 00:13:21.533 } 00:13:21.533 Got JSON-RPC error response 00:13:21.533 response: 00:13:21.533 { 00:13:21.533 "code": -32602, 00:13:21.533 "message": "Invalid cntlid range [1-65520]" 00:13:21.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.533 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7355 -i 6 -I 5 00:13:21.533 [2024-10-17 19:20:45.243308] nvmf_rpc.c: 
434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7355: invalid cntlid range [6-5] 00:13:21.533 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:21.533 { 00:13:21.533 "nqn": "nqn.2016-06.io.spdk:cnode7355", 00:13:21.533 "min_cntlid": 6, 00:13:21.533 "max_cntlid": 5, 00:13:21.533 "method": "nvmf_create_subsystem", 00:13:21.533 "req_id": 1 00:13:21.533 } 00:13:21.533 Got JSON-RPC error response 00:13:21.533 response: 00:13:21.533 { 00:13:21.533 "code": -32602, 00:13:21.533 "message": "Invalid cntlid range [6-5]" 00:13:21.533 }' 00:13:21.533 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:21.533 { 00:13:21.533 "nqn": "nqn.2016-06.io.spdk:cnode7355", 00:13:21.533 "min_cntlid": 6, 00:13:21.533 "max_cntlid": 5, 00:13:21.533 "method": "nvmf_create_subsystem", 00:13:21.533 "req_id": 1 00:13:21.533 } 00:13:21.533 Got JSON-RPC error response 00:13:21.533 response: 00:13:21.533 { 00:13:21.533 "code": -32602, 00:13:21.533 "message": "Invalid cntlid range [6-5]" 00:13:21.533 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:21.533 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:21.792 { 00:13:21.792 "name": "foobar", 00:13:21.792 "method": "nvmf_delete_target", 00:13:21.792 "req_id": 1 00:13:21.792 } 00:13:21.792 Got JSON-RPC error response 00:13:21.792 response: 00:13:21.792 { 00:13:21.792 "code": -32602, 00:13:21.792 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:21.792 }' 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:21.792 { 00:13:21.792 "name": "foobar", 00:13:21.792 "method": "nvmf_delete_target", 00:13:21.792 "req_id": 1 00:13:21.792 } 00:13:21.792 Got JSON-RPC error response 00:13:21.792 response: 00:13:21.792 { 00:13:21.792 "code": -32602, 00:13:21.792 "message": "The specified target doesn't exist, cannot delete it." 
00:13:21.792 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:21.792 rmmod nvme_tcp 00:13:21.792 rmmod nvme_fabrics 00:13:21.792 rmmod nvme_keyring 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 2042159 ']' 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # killprocess 2042159 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2042159 ']' 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2042159 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2042159 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2042159' 00:13:21.792 killing process with pid 2042159 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2042159 00:13:21.792 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2042159 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 
-- # iptables-restore 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:22.051 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.958 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:23.958 00:13:23.958 real 0m12.676s 00:13:23.958 user 0m21.084s 00:13:23.958 sys 0m5.498s 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:24.218 ************************************ 00:13:24.218 END TEST nvmf_invalid 00:13:24.218 ************************************ 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.218 ************************************ 00:13:24.218 START TEST nvmf_connect_stress 00:13:24.218 ************************************ 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:24.218 * Looking for test storage... 
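Between END TEST nvmf_invalid above and the connect_stress preamble that resumes below, nvmftestfini has just torn the fixture down in a fixed order: sync, flush the initiator-side kernel modules, stop the nvmf_tgt app, strip only the iptables rules SPDK tagged, and drop the test namespace. Condensed into one sketch; the && break retry and the _remove_spdk_ns body are assumptions, since the trace does not expand them:

    # Teardown order as traced in nvmftestfini/nvmfcleanup above.
    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break          # retried until initiator references drop (assumed)
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"            # killprocess: stop the reactor_0 target process
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # _remove_spdk_ns (assumed body)
    ip -4 addr flush cvl_0_1

Filtering iptables-save through grep -v SPDK_NVMF is the reason every rule the harness installs carries the 'SPDK_NVMF:' comment visible later in this log.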
00:13:24.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:24.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.218 --rc genhtml_branch_coverage=1 00:13:24.218 --rc genhtml_function_coverage=1 00:13:24.218 --rc genhtml_legend=1 00:13:24.218 --rc geninfo_all_blocks=1 00:13:24.218 --rc geninfo_unexecuted_blocks=1 00:13:24.218 00:13:24.218 ' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:24.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.218 --rc genhtml_branch_coverage=1 00:13:24.218 --rc genhtml_function_coverage=1 00:13:24.218 --rc genhtml_legend=1 00:13:24.218 --rc geninfo_all_blocks=1 00:13:24.218 --rc geninfo_unexecuted_blocks=1 00:13:24.218 00:13:24.218 ' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:24.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.218 --rc genhtml_branch_coverage=1 00:13:24.218 --rc genhtml_function_coverage=1 00:13:24.218 --rc genhtml_legend=1 00:13:24.218 --rc geninfo_all_blocks=1 00:13:24.218 --rc geninfo_unexecuted_blocks=1 00:13:24.218 00:13:24.218 ' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:24.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.218 --rc genhtml_branch_coverage=1 00:13:24.218 --rc genhtml_function_coverage=1 00:13:24.218 --rc genhtml_legend=1 00:13:24.218 --rc geninfo_all_blocks=1 00:13:24.218 --rc geninfo_unexecuted_blocks=1 00:13:24.218 00:13:24.218 ' 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.218 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:24.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.478 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:31.046 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:31.046 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:31.047 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:31.047 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:31.047 Found net devices under 0000:86:00.0: cvl_0_0 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:31.047 Found net devices under 0000:86:00.1: cvl_0_1 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:31.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.503 ms 00:13:31.047 00:13:31.047 --- 10.0.0.2 ping statistics --- 00:13:31.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.047 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:13:31.047 00:13:31.047 --- 10.0.0.1 ping statistics --- 00:13:31.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.047 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:31.047 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=2046546 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 2046546 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2046546 ']' 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:31.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.047 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.047 [2024-10-17 19:20:54.060248] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:13:31.047 [2024-10-17 19:20:54.060293] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.047 [2024-10-17 19:20:54.137082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.047 [2024-10-17 19:20:54.178274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.047 [2024-10-17 19:20:54.178309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.047 [2024-10-17 19:20:54.178316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.047 [2024-10-17 19:20:54.178322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.047 [2024-10-17 19:20:54.178327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.047 [2024-10-17 19:20:54.179781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.047 [2024-10-17 19:20:54.179879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.047 [2024-10-17 19:20:54.179880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 [2024-10-17 19:20:54.315640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
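The provisioning in flight here (connect_stress.sh @15 through @18) is plain JSON-RPC against the namespaced target that nvmfappstart launched a moment earlier with ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE. rpc_cmd is the harness's wrapper around rpc.py, so issuing the same four methods directly with rpc.py is an equivalent sketch of the steps traced here and just below, not the literal script:

    # TCP transport, an any-host subsystem, a 4420 listener, and a null backing bdev.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512      # null bdev: 1000 MiB, 512-byte blocks

10.0.0.2 and port 4420 are the NVMF_FIRST_TARGET_IP and NVMF_PORT that nvmf_tcp_init assigned inside the cvl_0_0_ns_spdk namespace earlier in this trace, which is why the ping checks above had to pass before this point.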
00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 [2024-10-17 19:20:54.335851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 NULL1 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2046579 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:31.048 19:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.048 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.307 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:31.307 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.307 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.307 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.875 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.875 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:31.875 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.875 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.875 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.134 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.134 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:32.134 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.134 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.134 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.393 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.393 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:32.393 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.393 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.393 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.652 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.652 19:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:32.652 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.652 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.652 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.219 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.219 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:33.219 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.219 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.219 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.478 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.478 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:33.478 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.478 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.478 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.736 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.736 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:33.736 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.736 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.736 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.995 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.995 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:33.995 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.995 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.995 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.253 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.253 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:34.253 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.253 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.253 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.821 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.821 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:34.821 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.821 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.821 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.079 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.079 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:35.079 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.079 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.079 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.338 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.338 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:35.338 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.338 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.338 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.597 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.597 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:35.597 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.597 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.597 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.856 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.856 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:35.856 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.856 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.856 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.422 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.422 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:36.422 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.422 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.422 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.681 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.681 19:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:36.681 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.681 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.681 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.940 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.940 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:36.940 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.940 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.940 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.200 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.200 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:37.200 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.200 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.200 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.768 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.768 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:37.768 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.768 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.768 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.026 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.026 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:38.026 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.026 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.026 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.285 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.285 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:38.285 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.285 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.285 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.543 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.543 19:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:38.543 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.543 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.543 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.802 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.802 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:38.802 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.802 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.802 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.369 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.369 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:39.369 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.369 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.369 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.628 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:39.628 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.628 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.628 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.887 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.887 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:39.887 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.887 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.887 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.146 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.146 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:40.146 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.146 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.146 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.713 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.713 19:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:40.713 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.713 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.713 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.972 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2046579 00:13:40.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2046579) - No such process 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2046579 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:40.972 rmmod nvme_tcp 00:13:40.972 rmmod nvme_fabrics 00:13:40.972 rmmod nvme_keyring 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 2046546 ']' 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 2046546 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2046546 ']' 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2046546 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2046546 00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 
00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2046546'
00:13:40.972 killing process with pid 2046546
00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2046546
00:13:40.972 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2046546
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:41.232 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:43.328
00:13:43.328 real 0m19.072s
00:13:43.328 user 0m39.313s
00:13:43.328 sys 0m8.653s
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:43.328 ************************************
00:13:43.328 END TEST nvmf_connect_stress
00:13:43.328 ************************************
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:43.328 ************************************
00:13:43.328 START TEST nvmf_fused_ordering
00:13:43.328 ************************************
00:13:43.328 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:43.328 * Looking for test storage...
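For reference, the nvmf_connect_stress phase that just finished (target/connect_stress.sh@34-35 and @38-39 in the trace above) boils down to the following polling pattern. This is a minimal sketch only: rpc_cmd here is a hypothetical thin wrapper around scripts/rpc.py, whereas the real harness multiplexes bare rpc_cmd calls over the rpc.txt FIFO it removes at line 39.

  # Sketch only: keep exercising the target with RPCs while the stress process is alive.
  rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }  # stand-in for the FIFO-backed helper
  stress_pid=2046579                       # PID of the stress workload seen in the kill -0 checks above
  while kill -0 "$stress_pid" 2>/dev/null; do
      rpc_cmd rpc_get_methods >/dev/null   # any cheap RPC keeps the target busy during the stress run
  done
  wait "$stress_pid" 2>/dev/null || true   # reap it; kill -0 reports "No such process" once it exits
  rm -f rpc.txt                            # drop the RPC FIFO, as at connect_stress.sh@39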
00:13:43.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.328 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:43.328 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:43.328 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:43.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.588 --rc genhtml_branch_coverage=1 00:13:43.588 --rc genhtml_function_coverage=1 00:13:43.588 --rc genhtml_legend=1 00:13:43.588 --rc geninfo_all_blocks=1 00:13:43.588 --rc geninfo_unexecuted_blocks=1 00:13:43.588 00:13:43.588 ' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:43.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.588 --rc genhtml_branch_coverage=1 00:13:43.588 --rc genhtml_function_coverage=1 00:13:43.588 --rc genhtml_legend=1 00:13:43.588 --rc geninfo_all_blocks=1 00:13:43.588 --rc geninfo_unexecuted_blocks=1 00:13:43.588 00:13:43.588 ' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:43.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.588 --rc genhtml_branch_coverage=1 00:13:43.588 --rc genhtml_function_coverage=1 00:13:43.588 --rc genhtml_legend=1 00:13:43.588 --rc geninfo_all_blocks=1 00:13:43.588 --rc geninfo_unexecuted_blocks=1 00:13:43.588 00:13:43.588 ' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:43.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:43.588 --rc genhtml_branch_coverage=1 00:13:43.588 --rc genhtml_function_coverage=1 00:13:43.588 --rc genhtml_legend=1 00:13:43.588 --rc geninfo_all_blocks=1 00:13:43.588 --rc geninfo_unexecuted_blocks=1 00:13:43.588 00:13:43.588 ' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:43.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.588 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.589 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.589 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:43.589 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:43.589 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:43.589 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:50.157 19:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:50.157 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.157 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:50.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:50.158 Found net devices under 0000:86:00.0: cvl_0_0 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:50.158 Found net devices under 0000:86:00.1: cvl_0_1 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:50.158 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:50.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:13:50.158 00:13:50.158 --- 10.0.0.2 ping statistics --- 00:13:50.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.158 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:50.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:13:50.158 00:13:50.158 --- 10.0.0.1 ping statistics --- 00:13:50.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.158 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.158 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=2052462 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 2052462 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2052462 ']' 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:50.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.159 [2024-10-17 19:21:13.231197] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:13:50.159 [2024-10-17 19:21:13.231245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.159 [2024-10-17 19:21:13.311187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.159 [2024-10-17 19:21:13.351576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:50.159 [2024-10-17 19:21:13.351614] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:50.159 [2024-10-17 19:21:13.351621] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:50.159 [2024-10-17 19:21:13.351626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:50.159 [2024-10-17 19:21:13.351632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:50.159 [2024-10-17 19:21:13.352187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.159 [2024-10-17 19:21:13.486692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]]
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:50.159 [2024-10-17 19:21:13.506894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:50.159 NULL1
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:50.159 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:13:50.159 [2024-10-17 19:21:13.561328] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
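Collected from the rpc_cmd traces above (fused_ordering.sh@15-20, including the nvmf_create_transport call traced earlier), the target setup for this test reduces to the following sequence. This is a sketch assuming direct use of scripts/rpc.py against the default /var/tmp/spdk.sock rather than the harness's rpc_cmd wrapper; all command names and flags are copied verbatim from the trace.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                               # TCP transport, flags as traced
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10       # -a: allow any host, -m: max namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # the listener logged by tcp.c above
  $rpc bdev_null_create NULL1 1000 512                                                       # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1                                # exposed as namespace 1 below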
00:13:50.159 [2024-10-17 19:21:13.561359] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052481 ]
00:13:50.159 Attached to nqn.2016-06.io.spdk:cnode1
00:13:50.159 Namespace ID: 1 size: 1GB
00:13:50.159 fused_ordering(0)
[fused_ordering(1) through fused_ordering(1022) elided: 1,022 identical per-command progress lines logged between 00:13:50.159 and 00:13:51.508]
00:13:51.508 fused_ordering(1023)
00:13:51.508 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:13:51.508 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:13:51.508 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup
00:13:51.508 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:51.768 rmmod nvme_tcp
00:13:51.768 rmmod nvme_fabrics
00:13:51.768 rmmod nvme_keyring
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
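The nvmfcleanup trace above suspends errexit and retries the module unload in a bounded loop, since nvme-tcp can stay referenced for a moment while connections drain; `modprobe -r` then also pulls out the now-unused nvme_fabrics and nvme_keyring dependencies, which is what the rmmod lines record. A minimal Bash sketch of that pattern, with an illustrative function name and back-off interval rather than the exact nvmf/common.sh body:

    unload_nvme_tcp() {
        set +e                              # unload may fail while references drain
        for i in {1..20}; do
            # -r also removes no-longer-used dependencies (nvme_fabrics, nvme_keyring)
            modprobe -v -r nvme-tcp && break
            sleep 0.5                       # assumed back-off between attempts
        done
        modprobe -v -r nvme-fabrics         # best effort; it may already be gone
        set -e
    }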
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 2052462 ']'
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 2052462
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2052462 ']'
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2052462
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2052462
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2052462'
00:13:51.768 killing process with pid 2052462
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2052462
00:13:51.768 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2052462
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:52.027 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:53.933
00:13:53.933 real 0m10.676s
00:13:53.933 user 0m4.915s
00:13:53.933 sys 0m5.835s
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:13:53.933 ************************************
00:13:53.933 END TEST nvmf_fused_ordering
00:13:53.933 ************************************
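The killprocess sequence above is deliberately guarded: it verifies a pid was supplied, probes liveness with `kill -0`, resolves the process name so a `sudo` wrapper is never signalled directly, and only then kills and reaps. A condensed Bash sketch of that flow (the function name and structure are illustrative, not the exact common/autotest_common.sh source):

    killprocess_sketch() {
        local pid=$1
        [ -z "$pid" ] && return 1                   # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0      # process already exited
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1  # never signal the sudo wrapper itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reaps cleanly only for our own children
    }

The iptr step that follows restores the firewall by replaying an iptables-save dump with every SPDK_NVMF-tagged rule filtered out, and remove_spdk_ns tears down the test network namespace before the per-test timing summary is printed.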
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:53.933 ************************************
00:13:53.933 START TEST nvmf_ns_masking
00:13:53.933 ************************************
00:13:53.933 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:13:54.192 * Looking for test storage...
00:13:54.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:54.192 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.193 --rc genhtml_branch_coverage=1 00:13:54.193 --rc genhtml_function_coverage=1 00:13:54.193 --rc genhtml_legend=1 00:13:54.193 --rc geninfo_all_blocks=1 00:13:54.193 --rc geninfo_unexecuted_blocks=1 00:13:54.193 00:13:54.193 ' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.193 --rc genhtml_branch_coverage=1 00:13:54.193 --rc genhtml_function_coverage=1 00:13:54.193 --rc genhtml_legend=1 00:13:54.193 --rc geninfo_all_blocks=1 00:13:54.193 --rc geninfo_unexecuted_blocks=1 00:13:54.193 00:13:54.193 ' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.193 --rc genhtml_branch_coverage=1 00:13:54.193 --rc genhtml_function_coverage=1 00:13:54.193 --rc genhtml_legend=1 00:13:54.193 --rc geninfo_all_blocks=1 00:13:54.193 --rc geninfo_unexecuted_blocks=1 00:13:54.193 00:13:54.193 ' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:54.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:54.193 --rc genhtml_branch_coverage=1 00:13:54.193 --rc genhtml_function_coverage=1 00:13:54.193 --rc genhtml_legend=1 00:13:54.193 --rc geninfo_all_blocks=1 00:13:54.193 --rc geninfo_unexecuted_blocks=1 00:13:54.193 00:13:54.193 ' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:54.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3adbd9c3-b6f4-4621-a2d9-3deec0d2882a 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7c16f908-a4b9-4d92-90a7-c229384609ff 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=ef838038-aa4f-4c11-bfe4-a368ad0c9734 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:54.193 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.766 19:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:00.766 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:00.766 19:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:00.766 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:00.766 Found net devices under 0000:86:00.0: cvl_0_0 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 
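The gather_supported_nvmf_pci_devs pass above works from fixed vendor:device IDs (both ports in this run are Intel 0x8086:0x159b, an E810 variant) and then maps each matched PCI function to its kernel net device through sysfs, keeping only interfaces whose operstate is up. An approximate Bash reconstruction of that mapping step; the two PCI addresses are the ones this run found, and the full ID table in nvmf/common.sh is much longer than shown here:

    intel=0x8086
    e810=(0000:86:00.0 0000:86:00.1)     # assumed pre-resolved from "$intel:0x159b"
    net_devs=()
    for pci in "${e810[@]}"; do
        # every netdev bound to this PCI function appears under its sysfs node
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue
            [ "$(cat "$dev/operstate")" = up ] && net_devs+=("${dev##*/}")
        done
    done
    echo "Found net devices: ${net_devs[*]}"   # cvl_0_0 cvl_0_1 in this run

Working from sysfs rather than parsing `ip link` keeps the PCI-to-netdev association explicit, which matters when two ports of the same NIC are split between target and initiator roles, as cvl_0_0 and cvl_0_1 are in the namespace setup that follows.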
00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:00.766 Found net devices under 0000:86:00.1: cvl_0_1 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.766 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.767 19:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:14:00.767 00:14:00.767 --- 10.0.0.2 ping statistics --- 00:14:00.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.767 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:14:00.767 00:14:00.767 --- 10.0.0.1 ping statistics --- 00:14:00.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.767 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=2056247 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 2056247 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2056247 ']' 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.767 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.767 [2024-10-17 19:21:23.944958] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:14:00.767 [2024-10-17 19:21:23.945006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.767 [2024-10-17 19:21:24.023685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.767 [2024-10-17 19:21:24.064864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.767 [2024-10-17 19:21:24.064899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.767 [2024-10-17 19:21:24.064906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.767 [2024-10-17 19:21:24.064912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.767 [2024-10-17 19:21:24.064918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
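With the namespaces wired up, the harness opens TCP port 4420 through a tagged iptables rule (the SPDK_NVMF comment lets teardown strip only its own rules), verifies reachability with one ping in each direction, then launches nvmf_tgt inside the target namespace and polls its RPC socket until it answers. A condensed sketch; rpc.py stands for the full scripts/rpc.py path seen in the log, and the rpc_get_methods probe inside waitforlisten is a simplified reconstruction of autotest_common.sh, not a verbatim copy:

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'        # tagged so cleanup can grep it back out
    ping -c 1 10.0.0.2                              # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    waitforlisten() {                               # simplified sketch
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1              # target died during startup
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }
    waitforlisten "$nvmfpid"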
00:14:00.767 [2024-10-17 19:21:24.065450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:00.767 [2024-10-17 19:21:24.361120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:00.767 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:01.026 Malloc1 00:14:01.026 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:01.026 Malloc2 00:14:01.026 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.285 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:01.544 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.802 [2024-10-17 19:21:25.335843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.802 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:01.802 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ef838038-aa4f-4c11-bfe4-a368ad0c9734 -a 10.0.0.2 -s 4420 -i 4 00:14:01.802 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:01.802 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:01.802 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.802 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:01.802 
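The target is assembled entirely over JSON-RPC, and the initiator side connects with nvme-cli; waitforserial then polls lsblk until the expected number of block devices carrying the subsystem serial shows up. The sequence as traced (rpc.py path shortened as before; the waitforserial body is condensed from the autotest_common.sh trace around it):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MiB bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # -I sets the host UUID, -i the number of I/O queues (per nvme-cli)
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I ef838038-aa4f-4c11-bfe4-a368ad0c9734 -a 10.0.0.2 -s 4420 -i 4

    waitforserial() {                               # condensed sketch
        local serial=$1 want=${2:-1} got=0 i=0
        while ((i++ <= 15)); do
            got=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((got == want)) && return 0
            sleep 2
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME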
19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.337 [ 0]:0x1 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc3b74768afa478db3258ccbde42f989 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc3b74768afa478db3258ccbde42f989 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:04.337 [ 0]:0x1 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc3b74768afa478db3258ccbde42f989 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc3b74768afa478db3258ccbde42f989 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.337 19:21:27 
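Visibility is asserted with the ns_is_visible helper: the NSID must be listed by nvme list-ns, and Identify Namespace must report a real NGUID, since a namespace the host is not allowed to see identifies as all zeros. A sketch of the helper as reconstructed from this trace (the real function lives in target/ns_masking.sh):

    ns_is_visible() {                     # $1 = nsid, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1   # prints "[ 0]:0x1" and succeeds while NSID 1 is visible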
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:04.337 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:04.337 [ 1]:0x2 00:14:04.338 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:04.338 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:04.338 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:04.338 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:04.338 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:04.338 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.338 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:04.595 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ef838038-aa4f-4c11-bfe4-a368ad0c9734 -a 10.0.0.2 -s 4420 -i 4 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:04.854 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # 
return 0 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:07.393 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.394 [ 0]:0x2 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.394 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.394 [ 0]:0x1 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc3b74768afa478db3258ccbde42f989 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc3b74768afa478db3258ccbde42f989 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.394 [ 1]:0x2 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.394 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.653 19:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:07.653 [ 0]:0x2 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:07.653 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:07.912 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I ef838038-aa4f-4c11-bfe4-a368ad0c9734 -a 10.0.0.2 -s 4420 -i 4 00:14:08.171 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:08.171 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:08.171 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
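Steps @79 through @95 exercise the masking toggle: namespace 1 is re-added with --no-auto-visible, after which nvmf_ns_add_host attaches it to a single host NQN and nvmf_ns_remove_host detaches it again; the NOT wrapper from autotest_common.sh inverts a command's exit status so that the expected "namespace hidden" failure counts as a pass. Roughly (NOT is heavily simplified here, the real helper also validates its argument, which is the valid_exec_arg machinery traced above):

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

    NOT() { ! "$@"; }                     # succeed only if the wrapped command fails

    NOT ns_is_visible 0x1                 # masked: NGUID reads back as all zeros
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    ns_is_visible 0x1                     # granted to host1, visible again
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    NOT ns_is_visible 0x1                 # revoked, hidden again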
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.171 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:08.171 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:08.171 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:10.076 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:10.334 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:10.334 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:10.334 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:10.334 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.334 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.334 [ 0]:0x1 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc3b74768afa478db3258ccbde42f989 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc3b74768afa478db3258ccbde42f989 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.334 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.593 [ 1]:0x2 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.593 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.851 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.851 [ 0]:0x2 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.852 19:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:10.852 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:11.111 [2024-10-17 19:21:34.650040] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:11.111 request: 00:14:11.111 { 00:14:11.111 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:11.111 "nsid": 2, 00:14:11.111 "host": "nqn.2016-06.io.spdk:host1", 00:14:11.111 "method": "nvmf_ns_remove_host", 00:14:11.111 "req_id": 1 00:14:11.111 } 00:14:11.111 Got JSON-RPC error response 00:14:11.111 response: 00:14:11.111 { 00:14:11.111 "code": -32602, 00:14:11.111 "message": "Invalid parameters" 00:14:11.111 } 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:11.111 19:21:34 
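The negative test at @111 checks that per-host visibility can only be managed for namespaces created with --no-auto-visible: namespace 2 was added auto-visible, so the target rejects the RPC with -32602 and NOT turns that expected failure into a pass (the auto-visible explanation is inferred from the error path in this trace). In effect:

    NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # target log: "Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2"
    # JSON-RPC reply: code -32602, "Invalid parameters" -> NOT makes the test pass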
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:11.111 [ 0]:0x2 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=97878b9ceabd47a58f9338bfabaaa039 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 97878b9ceabd47a58f9338bfabaaa039 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2058245 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
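For the second half the script starts another SPDK app on core 1 as the host (RPC socket at /var/tmp/host.sock), wipes both namespaces, and re-adds them with fixed NGUIDs derived from hard-coded UUIDs; judging by the per-host grants that follow, the -i flag on nvmf_subsystem_add_ns makes them non-auto-visible (an assumption from this trace, not verified against rpc.py). The uuid2nguid helper just upper-cases the UUID and drops the dashes:

    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &   # initiator-side app, core 1
    hostpid=$!

    uuid2nguid() { echo "${1^^}" | tr -d -; }           # sketch matching the traced tr -d -
    uuid2nguid 3adbd9c3-b6f4-4621-a2d9-3deec0d2882a     # -> 3ADBD9C3B6F44621A2D93DEEC0D2882A

    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 \
        -g 3ADBD9C3B6F44621A2D93DEEC0D2882A -i
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 \
        -g 7C16F908A4B94D9290A7C229384609FF -i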
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2058245 /var/tmp/host.sock 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2058245 ']' 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:11.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.111 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.111 [2024-10-17 19:21:34.860453] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:14:11.111 [2024-10-17 19:21:34.860497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058245 ] 00:14:11.370 [2024-10-17 19:21:34.935274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.370 [2024-10-17 19:21:34.975129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.629 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.629 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:11.629 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.629 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:11.888 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3adbd9c3-b6f4-4621-a2d9-3deec0d2882a 00:14:11.888 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:11.888 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3ADBD9C3B6F44621A2D93DEEC0D2882A -i 00:14:12.148 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7c16f908-a4b9-4d92-90a7-c229384609ff 00:14:12.148 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:14:12.148 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7C16F908A4B94D9290A7C229384609FF -i 00:14:12.407 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:12.666 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:12.666 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:12.666 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:12.925 nvme0n1 00:14:12.925 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:12.925 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:13.493 nvme1n2 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:13.493 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:13.750 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3adbd9c3-b6f4-4621-a2d9-3deec0d2882a == \3\a\d\b\d\9\c\3\-\b\6\f\4\-\4\6\2\1\-\a\2\d\9\-\3\d\e\e\c\0\d\2\8\8\2\a ]] 00:14:13.750 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:13.750 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:13.750 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
7c16f908-a4b9-4d92-90a7-c229384609ff == \7\c\1\6\f\9\0\8\-\a\4\b\9\-\4\d\9\2\-\9\0\a\7\-\c\2\2\9\3\8\4\6\0\9\f\f ]] 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2058245 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2058245 ']' 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2058245 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2058245 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2058245' 00:14:14.009 killing process with pid 2058245 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2058245 00:14:14.009 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2058245 00:14:14.268 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.526 rmmod nvme_tcp 00:14:14.526 rmmod nvme_fabrics 00:14:14.526 rmmod nvme_keyring 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 2056247 ']' 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 2056247 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2056247 ']' 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2056247 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
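Each host NQN then gets exactly one namespace (host1 -> NSID 1, host2 -> NSID 2), and the initiator app attaches one bdev_nvme controller per host identity; bdev_get_bdevs on the host side therefore reports nvme0n1 and nvme1n2 (controller nvme1 only sees NSID 2), and their UUIDs match the NGUIDs programmed above. Condensed from the trace, with hostrpc standing in for the script's wrapper that targets /var/tmp/host.sock:

    hostrpc() { rpc.py -s /var/tmp/host.sock "$@"; }

    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2

    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # -> nvme0n1
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1   # -> nvme1n2

    hostrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # "nvme0n1 nvme1n2"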
common/autotest_common.sh@955 -- # uname 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.526 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2056247 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2056247' 00:14:14.784 killing process with pid 2056247 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2056247 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2056247 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.784 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:17.321 00:14:17.321 real 0m22.875s 00:14:17.321 user 0m24.236s 00:14:17.321 sys 0m6.683s 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:17.321 ************************************ 00:14:17.321 END TEST nvmf_ns_masking 00:14:17.321 ************************************ 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
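Teardown mirrors setup: both SPDK apps are killed and reaped, the host-side kernel modules are unloaded, iptr restores iptables minus the SPDK_NVMF-tagged rule, and the namespace plus leftover addresses are removed. Condensed (killprocess is simplified; the real helper also special-cases processes launched via sudo, which is the reactor_0/reactor_1 check traced above, and the ip netns delete is assumed to be what _remove_spdk_ns does with its output muted on fd 15):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # assert it is still running
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                         # reap it and propagate the exit status
    }
    killprocess "$hostpid"                  # 2058245, the initiator app
    killprocess "$nvmfpid"                  # 2056247, nvmf_tgt

    modprobe -v -r nvme-tcp nvme-fabrics    # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only our rule
    ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1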
00:14:17.321 ************************************ 00:14:17.321 START TEST nvmf_nvme_cli 00:14:17.321 ************************************ 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:17.321 * Looking for test storage... 00:14:17.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.321 --rc genhtml_branch_coverage=1 00:14:17.321 --rc genhtml_function_coverage=1 00:14:17.321 --rc genhtml_legend=1 00:14:17.321 --rc geninfo_all_blocks=1 00:14:17.321 --rc geninfo_unexecuted_blocks=1 00:14:17.321 00:14:17.321 ' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.321 --rc genhtml_branch_coverage=1 00:14:17.321 --rc genhtml_function_coverage=1 00:14:17.321 --rc genhtml_legend=1 00:14:17.321 --rc geninfo_all_blocks=1 00:14:17.321 --rc geninfo_unexecuted_blocks=1 00:14:17.321 00:14:17.321 ' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.321 --rc genhtml_branch_coverage=1 00:14:17.321 --rc genhtml_function_coverage=1 00:14:17.321 --rc genhtml_legend=1 00:14:17.321 --rc geninfo_all_blocks=1 00:14:17.321 --rc geninfo_unexecuted_blocks=1 00:14:17.321 00:14:17.321 ' 00:14:17.321 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.321 --rc genhtml_branch_coverage=1 00:14:17.321 --rc genhtml_function_coverage=1 00:14:17.321 --rc genhtml_legend=1 00:14:17.321 --rc geninfo_all_blocks=1 00:14:17.322 --rc geninfo_unexecuted_blocks=1 00:14:17.322 00:14:17.322 ' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
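The cmp_versions trace just above is deciding whether the installed lcov (1.15) is older than 2 so the right coverage flags get exported. A condensed, runnable sketch of that component-wise compare, simplified from what scripts/common.sh does (the real helper also routes each component through decimal to reject non-numeric input):

# lt A B -> success when version A sorts strictly before version B
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov"   # prints: old lcov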
00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.322 19:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:17.322 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:23.894 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:23.894 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.894 
19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:23.894 Found net devices under 0000:86:00.0: cvl_0_0 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:23.894 Found net devices under 0000:86:00.1: cvl_0_1 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.894 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:23.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:14:23.895 00:14:23.895 --- 10.0.0.2 ping statistics --- 00:14:23.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.895 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:14:23.895 00:14:23.895 --- 10.0.0.1 ping statistics --- 00:14:23.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.895 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=2062297 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 2062297 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2062297 ']' 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.895 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 [2024-10-17 19:21:46.848771] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
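Before the target app starts, the trace above has wired up the test topology: the first e810 port (cvl_0_0) is moved into a fresh network namespace to play the target, the second (cvl_0_1) stays in the default namespace as the initiator, and a ping in each direction proves the path. Consolidated from those entries (device and namespace names per the trace; the tagged ACCEPT rule is the one sketched earlier):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator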
00:14:23.895 [2024-10-17 19:21:46.848825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.895 [2024-10-17 19:21:46.927891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:23.895 [2024-10-17 19:21:46.970903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:23.895 [2024-10-17 19:21:46.970940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.895 [2024-10-17 19:21:46.970948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.895 [2024-10-17 19:21:46.970954] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.895 [2024-10-17 19:21:46.970958] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.895 [2024-10-17 19:21:46.972465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.895 [2024-10-17 19:21:46.972576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:23.895 [2024-10-17 19:21:46.972682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.895 [2024-10-17 19:21:46.972683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 [2024-10-17 19:21:47.108420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 Malloc0 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
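Collected from the rpc_cmd entries here and in the next few lines, provisioning the target boils down to one rpc.py sequence (in the harness rpc_cmd wraps this same script; the target runs inside the namespace, but its RPC socket is an ordinary UNIX socket reachable from the host):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The initiator half of the test then drives nvme-cli against that listener, exactly as the discover/connect/disconnect entries that follow show:

nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1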
00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 Malloc1 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 [2024-10-17 19:21:47.207175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.895 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:23.895 00:14:23.895 Discovery Log Number of Records 2, Generation counter 2 00:14:23.895 =====Discovery Log Entry 0====== 00:14:23.895 trtype: tcp 00:14:23.895 adrfam: ipv4 00:14:23.895 subtype: current discovery subsystem 00:14:23.895 treq: not required 00:14:23.895 portid: 0 00:14:23.895 trsvcid: 4420 00:14:23.896 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:23.896 traddr: 10.0.0.2 00:14:23.896 eflags: explicit discovery connections, duplicate discovery information 00:14:23.896 sectype: none 00:14:23.896 =====Discovery Log Entry 1====== 00:14:23.896 trtype: tcp 00:14:23.896 adrfam: ipv4 00:14:23.896 subtype: nvme subsystem 00:14:23.896 treq: not required 00:14:23.896 portid: 0 00:14:23.896 trsvcid: 4420 00:14:23.896 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:23.896 traddr: 10.0.0.2 00:14:23.896 eflags: none 00:14:23.896 sectype: none 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:23.896 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:24.832 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:24.832 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:24.832 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:24.832 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:24.832 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:24.832 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:26.736 19:21:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.736 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:26.995 /dev/nvme0n2 ]] 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:26.995 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:27.254 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:27.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.514 19:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:27.514 rmmod nvme_tcp 00:14:27.514 rmmod nvme_fabrics 00:14:27.514 rmmod nvme_keyring 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 2062297 ']' 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 2062297 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2062297 ']' 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2062297 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2062297 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2062297' 00:14:27.514 killing process with pid 2062297 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2062297 00:14:27.514 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2062297 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.773 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.308 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.308 00:14:30.308 real 0m12.864s 00:14:30.308 user 0m19.500s 00:14:30.308 sys 0m5.107s 00:14:30.308 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.308 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.309 ************************************ 00:14:30.309 END TEST nvmf_nvme_cli 00:14:30.309 ************************************ 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.309 ************************************ 00:14:30.309 START TEST nvmf_vfio_user 00:14:30.309 ************************************ 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:30.309 * Looking for test storage... 00:14:30.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.309 --rc genhtml_branch_coverage=1 00:14:30.309 --rc genhtml_function_coverage=1 00:14:30.309 --rc genhtml_legend=1 00:14:30.309 --rc geninfo_all_blocks=1 00:14:30.309 --rc geninfo_unexecuted_blocks=1 00:14:30.309 00:14:30.309 ' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.309 --rc genhtml_branch_coverage=1 00:14:30.309 --rc genhtml_function_coverage=1 00:14:30.309 --rc genhtml_legend=1 00:14:30.309 --rc geninfo_all_blocks=1 00:14:30.309 --rc geninfo_unexecuted_blocks=1 00:14:30.309 00:14:30.309 ' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.309 --rc genhtml_branch_coverage=1 00:14:30.309 --rc genhtml_function_coverage=1 00:14:30.309 --rc genhtml_legend=1 00:14:30.309 --rc geninfo_all_blocks=1 00:14:30.309 --rc geninfo_unexecuted_blocks=1 00:14:30.309 00:14:30.309 ' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:30.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.309 --rc genhtml_branch_coverage=1 00:14:30.309 --rc genhtml_function_coverage=1 00:14:30.309 --rc genhtml_legend=1 00:14:30.309 --rc geninfo_all_blocks=1 00:14:30.309 --rc geninfo_unexecuted_blocks=1 00:14:30.309 00:14:30.309 ' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.309 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
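The `[: : integer expression expected` message above is a real (if harmless) scripting error worth decoding: nvmf/common.sh line 33 executes `'[' '' -eq 1 ']'`, i.e. a numeric -eq test against a variable that expanded to the empty string. Bash's `[` builtin rejects the comparison and returns non-zero, so the test simply behaves as false and the run continues, as the subsequent trace lines show. A minimal sketch of the defensive form, using a hypothetical FLAG variable rather than the one actually tested in common.sh:

    # FLAG is a placeholder, not the variable from common.sh line 33;
    # default unset/empty values to 0 so the numeric test never sees ''.
    if [ "${FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi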
00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2063565 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2063565' 00:14:30.310 Process pid: 2063565 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2063565 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2063565 ']' 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:30.310 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.310 [2024-10-17 19:21:53.857165] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:14:30.310 [2024-10-17 19:21:53.857215] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.310 [2024-10-17 19:21:53.935345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.310 [2024-10-17 19:21:53.975629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.310 [2024-10-17 19:21:53.975668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:30.310 [2024-10-17 19:21:53.975675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.310 [2024-10-17 19:21:53.975681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.310 [2024-10-17 19:21:53.975686] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.310 [2024-10-17 19:21:53.977168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.310 [2024-10-17 19:21:53.977280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.310 [2024-10-17 19:21:53.977362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.310 [2024-10-17 19:21:53.977363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.310 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.310 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:30.310 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:31.689 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:31.689 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:31.689 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:31.689 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.689 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:31.689 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:31.947 Malloc1 00:14:31.947 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:31.947 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:32.206 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:32.466 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.466 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:32.466 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:32.725 Malloc2 00:14:32.725 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
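The nvmf_tgt launch and waitforlisten handshake above reduce to a background start followed by polling for the RPC socket. A minimal sketch, assuming the default /var/tmp/spdk.sock path and the flags from this run (-i shared-memory id, -e tracepoint mask, -m core list):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    # poll for the UNIX-domain RPC socket; give up if the target dies first
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.1
    done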
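setup_nvmf_vfio_user then repeats the same per-device sequence for each of the NUM_DEVICES=2 controllers (the second iteration continues below: the cnode2 namespace and listener are added next). Condensed into a sketch using the exact RPCs and paths from the trace:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $rpc bdev_malloc_create 64 512 -b Malloc$i   # 64 MB bdev, 512 B blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done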
00:14:32.984 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:32.984 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:33.243 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:33.243 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:33.243 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:33.243 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:33.243 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:33.243 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:33.243 [2024-10-17 19:21:56.963799] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:14:33.243 [2024-10-17 19:21:56.963832] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064220 ] 00:14:33.243 [2024-10-17 19:21:57.005077] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:33.243 [2024-10-17 19:21:57.007369] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.243 [2024-10-17 19:21:57.007387] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff70f4b2000 00:14:33.243 [2024-10-17 19:21:57.008363] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.009360] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.010366] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.011374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.012389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.013383] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.014384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.015394] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.243 [2024-10-17 19:21:57.016399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.243 [2024-10-17 19:21:57.016411] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff70f4a7000 00:14:33.243 [2024-10-17 19:21:57.017326] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.505 [2024-10-17 19:21:57.029867] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:33.505 [2024-10-17 19:21:57.029894] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:33.505 [2024-10-17 19:21:57.035501] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:33.505 [2024-10-17 19:21:57.035536] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:33.505 [2024-10-17 19:21:57.035611] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:33.505 [2024-10-17 19:21:57.035628] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:33.505 [2024-10-17 19:21:57.035634] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:33.505 [2024-10-17 19:21:57.036501] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:33.505 [2024-10-17 19:21:57.036509] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:33.505 [2024-10-17 19:21:57.036519] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:33.505 [2024-10-17 19:21:57.037505] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:33.505 [2024-10-17 19:21:57.037513] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:33.505 [2024-10-17 19:21:57.037519] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:33.505 [2024-10-17 19:21:57.038507] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:33.505 [2024-10-17 19:21:57.038517] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:33.505 [2024-10-17 19:21:57.039513] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:33.505 [2024-10-17 
19:21:57.039520] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:33.505 [2024-10-17 19:21:57.039525] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:33.505 [2024-10-17 19:21:57.039531] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:33.505 [2024-10-17 19:21:57.039636] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:33.505 [2024-10-17 19:21:57.039640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:33.505 [2024-10-17 19:21:57.039645] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:33.505 [2024-10-17 19:21:57.040514] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:33.505 [2024-10-17 19:21:57.041518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:33.505 [2024-10-17 19:21:57.042527] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:33.505 [2024-10-17 19:21:57.043528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.505 [2024-10-17 19:21:57.043605] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:33.505 [2024-10-17 19:21:57.044539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:33.505 [2024-10-17 19:21:57.044545] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:33.505 [2024-10-17 19:21:57.044549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044566] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:33.505 [2024-10-17 19:21:57.044573] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044589] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.505 [2024-10-17 19:21:57.044593] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.505 [2024-10-17 19:21:57.044599] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.505 [2024-10-17 19:21:57.044615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.505 [2024-10-17 19:21:57.044663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:33.505 [2024-10-17 19:21:57.044671] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:33.505 [2024-10-17 19:21:57.044675] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:33.505 [2024-10-17 19:21:57.044679] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:33.505 [2024-10-17 19:21:57.044683] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:33.505 [2024-10-17 19:21:57.044687] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:33.505 [2024-10-17 19:21:57.044691] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:33.505 [2024-10-17 19:21:57.044695] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044711] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:33.505 [2024-10-17 19:21:57.044726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:33.505 [2024-10-17 19:21:57.044736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.505 [2024-10-17 19:21:57.044744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.505 [2024-10-17 19:21:57.044751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.505 [2024-10-17 19:21:57.044758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.505 [2024-10-17 19:21:57.044762] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044769] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:33.505 [2024-10-17 19:21:57.044787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:33.505 [2024-10-17 19:21:57.044792] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:33.505 [2024-10-17 19:21:57.044797] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044802] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.505 [2024-10-17 19:21:57.044831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:33.505 [2024-10-17 19:21:57.044880] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044887] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044894] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:33.505 [2024-10-17 19:21:57.044898] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:33.505 [2024-10-17 19:21:57.044901] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.505 [2024-10-17 19:21:57.044906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:33.505 [2024-10-17 19:21:57.044919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:33.505 [2024-10-17 19:21:57.044927] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:33.505 [2024-10-17 19:21:57.044938] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044945] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:33.505 [2024-10-17 19:21:57.044951] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.505 [2024-10-17 19:21:57.044954] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.506 [2024-10-17 19:21:57.044957] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.506 [2024-10-17 19:21:57.044963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.044987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.044997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045004] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045009] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.506 [2024-10-17 19:21:57.045013] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.506 [2024-10-17 19:21:57.045016] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.506 [2024-10-17 19:21:57.045021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045040] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045045] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045058] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045063] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045067] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045071] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:33.506 [2024-10-17 19:21:57.045075] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:33.506 [2024-10-17 19:21:57.045080] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:33.506 [2024-10-17 19:21:57.045097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045131] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045153] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045176] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:33.506 [2024-10-17 19:21:57.045181] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:33.506 [2024-10-17 19:21:57.045183] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:33.506 [2024-10-17 19:21:57.045186] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:33.506 [2024-10-17 19:21:57.045189] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:33.506 [2024-10-17 19:21:57.045195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:33.506 [2024-10-17 19:21:57.045201] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:33.506 [2024-10-17 19:21:57.045205] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:33.506 [2024-10-17 19:21:57.045208] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.506 [2024-10-17 19:21:57.045213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045219] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:33.506 [2024-10-17 19:21:57.045223] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.506 [2024-10-17 19:21:57.045226] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.506 [2024-10-17 19:21:57.045232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045239] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:33.506 [2024-10-17 19:21:57.045242] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:33.506 [2024-10-17 19:21:57.045245] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.506 [2024-10-17 19:21:57.045251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:33.506 [2024-10-17 19:21:57.045257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:33.506 [2024-10-17 19:21:57.045283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:33.506 ===================================================== 00:14:33.506 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:33.506 ===================================================== 00:14:33.506 Controller Capabilities/Features 00:14:33.506 ================================ 00:14:33.506 Vendor ID: 4e58 00:14:33.506 Subsystem Vendor ID: 4e58 00:14:33.506 Serial Number: SPDK1 00:14:33.506 Model Number: SPDK bdev Controller 00:14:33.506 Firmware Version: 25.01 00:14:33.506 Recommended Arb Burst: 6 00:14:33.506 IEEE OUI Identifier: 8d 6b 50 00:14:33.506 Multi-path I/O 00:14:33.506 May have multiple subsystem ports: Yes 00:14:33.506 May have multiple controllers: Yes 00:14:33.506 Associated with SR-IOV VF: No 00:14:33.506 Max Data Transfer Size: 131072 00:14:33.506 Max Number of Namespaces: 32 00:14:33.506 Max Number of I/O Queues: 127 00:14:33.506 NVMe Specification Version (VS): 1.3 00:14:33.506 NVMe Specification Version (Identify): 1.3 00:14:33.506 Maximum Queue Entries: 256 00:14:33.506 Contiguous Queues Required: Yes 00:14:33.506 Arbitration Mechanisms Supported 00:14:33.506 Weighted Round Robin: Not Supported 00:14:33.506 Vendor Specific: Not Supported 00:14:33.506 Reset Timeout: 15000 ms 00:14:33.506 Doorbell Stride: 4 bytes 00:14:33.506 NVM Subsystem Reset: Not Supported 00:14:33.506 Command Sets Supported 00:14:33.506 NVM Command Set: Supported 00:14:33.506 Boot Partition: Not Supported 00:14:33.506 Memory Page Size Minimum: 4096 bytes 00:14:33.506 Memory Page Size Maximum: 4096 bytes 00:14:33.506 Persistent Memory Region: Not Supported 00:14:33.506 Optional Asynchronous Events Supported 00:14:33.506 Namespace Attribute Notices: Supported 00:14:33.506 Firmware Activation Notices: Not Supported 00:14:33.506 ANA Change Notices: Not Supported 00:14:33.506 PLE Aggregate Log Change Notices: Not Supported 00:14:33.506 LBA Status Info Alert Notices: Not Supported 00:14:33.506 EGE Aggregate Log Change Notices: Not Supported 00:14:33.506 Normal NVM Subsystem Shutdown event: Not Supported 00:14:33.506 Zone Descriptor Change Notices: Not Supported 00:14:33.506 Discovery Log Change Notices: Not Supported 00:14:33.506 Controller Attributes 00:14:33.506 128-bit Host Identifier: Supported 00:14:33.506 Non-Operational Permissive Mode: Not Supported 00:14:33.506 NVM Sets: Not Supported 00:14:33.506 Read Recovery Levels: Not Supported 00:14:33.506 Endurance Groups: Not Supported 00:14:33.506 Predictable Latency Mode: Not Supported 00:14:33.506 Traffic Based Keep ALive: Not Supported 00:14:33.506 Namespace Granularity: Not Supported 00:14:33.506 SQ Associations: Not Supported 00:14:33.506 UUID List: Not Supported 00:14:33.506 Multi-Domain Subsystem: Not Supported 00:14:33.506 Fixed Capacity Management: Not Supported 00:14:33.506 Variable Capacity Management: Not Supported 00:14:33.506 Delete Endurance Group: Not Supported 00:14:33.506 Delete NVM Set: Not Supported 00:14:33.506 Extended LBA Formats Supported: Not Supported 00:14:33.506 Flexible Data Placement Supported: Not Supported 00:14:33.506 00:14:33.506 Controller Memory Buffer Support 00:14:33.506 ================================ 00:14:33.506 Supported: No 00:14:33.506 00:14:33.506 Persistent Memory Region Support 00:14:33.506 
================================ 00:14:33.506 Supported: No 00:14:33.506 00:14:33.506 Admin Command Set Attributes 00:14:33.506 ============================ 00:14:33.506 Security Send/Receive: Not Supported 00:14:33.506 Format NVM: Not Supported 00:14:33.506 Firmware Activate/Download: Not Supported 00:14:33.506 Namespace Management: Not Supported 00:14:33.506 Device Self-Test: Not Supported 00:14:33.506 Directives: Not Supported 00:14:33.506 NVMe-MI: Not Supported 00:14:33.506 Virtualization Management: Not Supported 00:14:33.506 Doorbell Buffer Config: Not Supported 00:14:33.506 Get LBA Status Capability: Not Supported 00:14:33.506 Command & Feature Lockdown Capability: Not Supported 00:14:33.506 Abort Command Limit: 4 00:14:33.506 Async Event Request Limit: 4 00:14:33.506 Number of Firmware Slots: N/A 00:14:33.506 Firmware Slot 1 Read-Only: N/A 00:14:33.507 Firmware Activation Without Reset: N/A 00:14:33.507 Multiple Update Detection Support: N/A 00:14:33.507 Firmware Update Granularity: No Information Provided 00:14:33.507 Per-Namespace SMART Log: No 00:14:33.507 Asymmetric Namespace Access Log Page: Not Supported 00:14:33.507 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:33.507 Command Effects Log Page: Supported 00:14:33.507 Get Log Page Extended Data: Supported 00:14:33.507 Telemetry Log Pages: Not Supported 00:14:33.507 Persistent Event Log Pages: Not Supported 00:14:33.507 Supported Log Pages Log Page: May Support 00:14:33.507 Commands Supported & Effects Log Page: Not Supported 00:14:33.507 Feature Identifiers & Effects Log Page:May Support 00:14:33.507 NVMe-MI Commands & Effects Log Page: May Support 00:14:33.507 Data Area 4 for Telemetry Log: Not Supported 00:14:33.507 Error Log Page Entries Supported: 128 00:14:33.507 Keep Alive: Supported 00:14:33.507 Keep Alive Granularity: 10000 ms 00:14:33.507 00:14:33.507 NVM Command Set Attributes 00:14:33.507 ========================== 00:14:33.507 Submission Queue Entry Size 00:14:33.507 Max: 64 00:14:33.507 Min: 64 00:14:33.507 Completion Queue Entry Size 00:14:33.507 Max: 16 00:14:33.507 Min: 16 00:14:33.507 Number of Namespaces: 32 00:14:33.507 Compare Command: Supported 00:14:33.507 Write Uncorrectable Command: Not Supported 00:14:33.507 Dataset Management Command: Supported 00:14:33.507 Write Zeroes Command: Supported 00:14:33.507 Set Features Save Field: Not Supported 00:14:33.507 Reservations: Not Supported 00:14:33.507 Timestamp: Not Supported 00:14:33.507 Copy: Supported 00:14:33.507 Volatile Write Cache: Present 00:14:33.507 Atomic Write Unit (Normal): 1 00:14:33.507 Atomic Write Unit (PFail): 1 00:14:33.507 Atomic Compare & Write Unit: 1 00:14:33.507 Fused Compare & Write: Supported 00:14:33.507 Scatter-Gather List 00:14:33.507 SGL Command Set: Supported (Dword aligned) 00:14:33.507 SGL Keyed: Not Supported 00:14:33.507 SGL Bit Bucket Descriptor: Not Supported 00:14:33.507 SGL Metadata Pointer: Not Supported 00:14:33.507 Oversized SGL: Not Supported 00:14:33.507 SGL Metadata Address: Not Supported 00:14:33.507 SGL Offset: Not Supported 00:14:33.507 Transport SGL Data Block: Not Supported 00:14:33.507 Replay Protected Memory Block: Not Supported 00:14:33.507 00:14:33.507 Firmware Slot Information 00:14:33.507 ========================= 00:14:33.507 Active slot: 1 00:14:33.507 Slot 1 Firmware Revision: 25.01 00:14:33.507 00:14:33.507 00:14:33.507 Commands Supported and Effects 00:14:33.507 ============================== 00:14:33.507 Admin Commands 00:14:33.507 -------------- 00:14:33.507 Get Log Page (02h): Supported 
00:14:33.507 Identify (06h): Supported 00:14:33.507 Abort (08h): Supported 00:14:33.507 Set Features (09h): Supported 00:14:33.507 Get Features (0Ah): Supported 00:14:33.507 Asynchronous Event Request (0Ch): Supported 00:14:33.507 Keep Alive (18h): Supported 00:14:33.507 I/O Commands 00:14:33.507 ------------ 00:14:33.507 Flush (00h): Supported LBA-Change 00:14:33.507 Write (01h): Supported LBA-Change 00:14:33.507 Read (02h): Supported 00:14:33.507 Compare (05h): Supported 00:14:33.507 Write Zeroes (08h): Supported LBA-Change 00:14:33.507 Dataset Management (09h): Supported LBA-Change 00:14:33.507 Copy (19h): Supported LBA-Change 00:14:33.507 00:14:33.507 Error Log 00:14:33.507 ========= 00:14:33.507 00:14:33.507 Arbitration 00:14:33.507 =========== 00:14:33.507 Arbitration Burst: 1 00:14:33.507 00:14:33.507 Power Management 00:14:33.507 ================ 00:14:33.507 Number of Power States: 1 00:14:33.507 Current Power State: Power State #0 00:14:33.507 Power State #0: 00:14:33.507 Max Power: 0.00 W 00:14:33.507 Non-Operational State: Operational 00:14:33.507 Entry Latency: Not Reported 00:14:33.507 Exit Latency: Not Reported 00:14:33.507 Relative Read Throughput: 0 00:14:33.507 Relative Read Latency: 0 00:14:33.507 Relative Write Throughput: 0 00:14:33.507 Relative Write Latency: 0 00:14:33.507 Idle Power: Not Reported 00:14:33.507 Active Power: Not Reported 00:14:33.507 Non-Operational Permissive Mode: Not Supported 00:14:33.507 00:14:33.507 Health Information 00:14:33.507 ================== 00:14:33.507 Critical Warnings: 00:14:33.507 Available Spare Space: OK 00:14:33.507 Temperature: OK 00:14:33.507 Device Reliability: OK 00:14:33.507 Read Only: No 00:14:33.507 Volatile Memory Backup: OK 00:14:33.507 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:33.507 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:33.507 Available Spare: 0% 00:14:33.507 Available Sp[2024-10-17 19:21:57.045363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:33.507 [2024-10-17 19:21:57.045370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:33.507 [2024-10-17 19:21:57.045395] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:33.507 [2024-10-17 19:21:57.045403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.507 [2024-10-17 19:21:57.045408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.507 [2024-10-17 19:21:57.045414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.507 [2024-10-17 19:21:57.045419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.507 [2024-10-17 19:21:57.045549] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:33.507 [2024-10-17 19:21:57.045558] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:33.507 [2024-10-17 19:21:57.046548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling 
controller 00:14:33.507 [2024-10-17 19:21:57.046595] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:33.507 [2024-10-17 19:21:57.046605] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:33.507 [2024-10-17 19:21:57.047556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:33.507 [2024-10-17 19:21:57.047565] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:33.507 [2024-10-17 19:21:57.047617] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:33.507 [2024-10-17 19:21:57.049610] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.507 are Threshold: 0% 00:14:33.507 Life Percentage Used: 0% 00:14:33.507 Data Units Read: 0 00:14:33.507 Data Units Written: 0 00:14:33.507 Host Read Commands: 0 00:14:33.507 Host Write Commands: 0 00:14:33.507 Controller Busy Time: 0 minutes 00:14:33.507 Power Cycles: 0 00:14:33.507 Power On Hours: 0 hours 00:14:33.507 Unsafe Shutdowns: 0 00:14:33.507 Unrecoverable Media Errors: 0 00:14:33.507 Lifetime Error Log Entries: 0 00:14:33.507 Warning Temperature Time: 0 minutes 00:14:33.507 Critical Temperature Time: 0 minutes 00:14:33.507 00:14:33.507 Number of Queues 00:14:33.507 ================ 00:14:33.507 Number of I/O Submission Queues: 127 00:14:33.507 Number of I/O Completion Queues: 127 00:14:33.507 00:14:33.507 Active Namespaces 00:14:33.507 ================= 00:14:33.507 Namespace ID:1 00:14:33.507 Error Recovery Timeout: Unlimited 00:14:33.507 Command Set Identifier: NVM (00h) 00:14:33.507 Deallocate: Supported 00:14:33.507 Deallocated/Unwritten Error: Not Supported 00:14:33.507 Deallocated Read Value: Unknown 00:14:33.507 Deallocate in Write Zeroes: Not Supported 00:14:33.507 Deallocated Guard Field: 0xFFFF 00:14:33.507 Flush: Supported 00:14:33.507 Reservation: Supported 00:14:33.507 Namespace Sharing Capabilities: Multiple Controllers 00:14:33.507 Size (in LBAs): 131072 (0GiB) 00:14:33.507 Capacity (in LBAs): 131072 (0GiB) 00:14:33.507 Utilization (in LBAs): 131072 (0GiB) 00:14:33.507 NGUID: 32EDB03C0DEA4BADB3148771293CC640 00:14:33.507 UUID: 32edb03c-0dea-4bad-b314-8771293cc640 00:14:33.507 Thin Provisioning: Not Supported 00:14:33.507 Per-NS Atomic Units: Yes 00:14:33.507 Atomic Boundary Size (Normal): 0 00:14:33.507 Atomic Boundary Size (PFail): 0 00:14:33.507 Atomic Boundary Offset: 0 00:14:33.507 Maximum Single Source Range Length: 65535 00:14:33.507 Maximum Copy Length: 65535 00:14:33.507 Maximum Source Range Count: 1 00:14:33.507 NGUID/EUI64 Never Reused: No 00:14:33.507 Namespace Write Protected: No 00:14:33.507 Number of LBA Formats: 1 00:14:33.507 Current LBA Format: LBA Format #00 00:14:33.507 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:33.507 00:14:33.507 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:33.507 [2024-10-17 19:21:57.278498] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.775 Initializing NVMe Controllers 00:14:38.775 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:38.775 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:38.775 Initialization complete. Launching workers. 00:14:38.775 ======================================================== 00:14:38.776 Latency(us) 00:14:38.776 Device Information : IOPS MiB/s Average min max 00:14:38.776 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39959.86 156.09 3203.03 934.05 8628.65 00:14:38.776 ======================================================== 00:14:38.776 Total : 39959.86 156.09 3203.03 934.05 8628.65 00:14:38.776 00:14:38.776 [2024-10-17 19:22:02.296986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.776 19:22:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:38.776 [2024-10-17 19:22:02.534048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.189 Initializing NVMe Controllers 00:14:44.189 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.189 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:44.189 Initialization complete. Launching workers. 00:14:44.189 ======================================================== 00:14:44.189 Latency(us) 00:14:44.189 Device Information : IOPS MiB/s Average min max 00:14:44.189 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.75 62.73 7976.59 4971.34 10972.36 00:14:44.189 ======================================================== 00:14:44.189 Total : 16057.75 62.73 7976.59 4971.34 10972.36 00:14:44.189 00:14:44.189 [2024-10-17 19:22:07.574945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.189 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:44.189 [2024-10-17 19:22:07.778899] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.459 [2024-10-17 19:22:12.855967] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.459 Initializing NVMe Controllers 00:14:49.459 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.459 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.459 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:49.459 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:49.459 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:49.459 Initialization complete. Launching workers. 
00:14:49.459 Starting thread on core 2 00:14:49.459 Starting thread on core 3 00:14:49.459 Starting thread on core 1 00:14:49.459 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:49.459 [2024-10-17 19:22:13.154968] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.747 [2024-10-17 19:22:16.207368] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.747 Initializing NVMe Controllers 00:14:52.747 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.747 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.747 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:52.747 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:52.747 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:52.747 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:52.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:52.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:52.747 Initialization complete. Launching workers. 00:14:52.747 Starting thread on core 1 with urgent priority queue 00:14:52.747 Starting thread on core 2 with urgent priority queue 00:14:52.747 Starting thread on core 3 with urgent priority queue 00:14:52.747 Starting thread on core 0 with urgent priority queue 00:14:52.747 SPDK bdev Controller (SPDK1 ) core 0: 8697.00 IO/s 11.50 secs/100000 ios 00:14:52.747 SPDK bdev Controller (SPDK1 ) core 1: 9534.67 IO/s 10.49 secs/100000 ios 00:14:52.747 SPDK bdev Controller (SPDK1 ) core 2: 8791.67 IO/s 11.37 secs/100000 ios 00:14:52.747 SPDK bdev Controller (SPDK1 ) core 3: 8163.00 IO/s 12.25 secs/100000 ios 00:14:52.747 ======================================================== 00:14:52.747 00:14:52.747 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:52.747 [2024-10-17 19:22:16.494403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.747 Initializing NVMe Controllers 00:14:52.747 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.747 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.747 Namespace ID: 1 size: 0GB 00:14:52.747 Initialization complete. 00:14:52.747 INFO: using host memory buffer for IO 00:14:52.747 Hello world! 
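Every example binary in this phase (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the target through the same transport-ID string. A sketch of the pattern, with values taken verbatim from the invocations above:

    traddr=/var/run/vfio-user/domain/vfio-user1/1
    subnqn=nqn.2019-07.io.spdk:cnode1
    build/examples/hello_world -d 256 -g \
        -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn"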
00:14:53.007 [2024-10-17 19:22:16.533662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:53.007 19:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:53.265 [2024-10-17 19:22:16.815971] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.203 Initializing NVMe Controllers 00:14:54.203 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.203 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.203 Initialization complete. Launching workers. 00:14:54.203 submit (in ns) avg, min, max = 6440.7, 3137.1, 4000197.1 00:14:54.203 complete (in ns) avg, min, max = 20224.9, 1709.5, 4993384.8 00:14:54.203 00:14:54.203 Submit histogram 00:14:54.203 ================ 00:14:54.203 Range in us Cumulative Count 00:14:54.203 3.124 - 3.139: 0.0118% ( 2) 00:14:54.203 3.139 - 3.154: 0.0235% ( 2) 00:14:54.203 3.154 - 3.170: 0.0706% ( 8) 00:14:54.203 3.170 - 3.185: 0.1590% ( 15) 00:14:54.203 3.185 - 3.200: 0.2473% ( 15) 00:14:54.203 3.200 - 3.215: 0.6711% ( 72) 00:14:54.203 3.215 - 3.230: 2.4196% ( 297) 00:14:54.203 3.230 - 3.246: 6.8468% ( 752) 00:14:54.203 3.246 - 3.261: 11.9687% ( 870) 00:14:54.203 3.261 - 3.276: 18.5388% ( 1116) 00:14:54.203 3.276 - 3.291: 25.0324% ( 1103) 00:14:54.203 3.291 - 3.307: 31.7791% ( 1146) 00:14:54.203 3.307 - 3.322: 37.8783% ( 1036) 00:14:54.203 3.322 - 3.337: 43.6006% ( 972) 00:14:54.203 3.337 - 3.352: 49.2700% ( 963) 00:14:54.203 3.352 - 3.368: 54.5155% ( 891) 00:14:54.203 3.368 - 3.383: 61.3623% ( 1163) 00:14:54.203 3.383 - 3.398: 68.8155% ( 1266) 00:14:54.203 3.398 - 3.413: 74.1199% ( 901) 00:14:54.203 3.413 - 3.429: 79.2241% ( 867) 00:14:54.203 3.429 - 3.444: 82.8447% ( 615) 00:14:54.203 3.444 - 3.459: 85.4233% ( 438) 00:14:54.203 3.459 - 3.474: 86.8833% ( 248) 00:14:54.203 3.474 - 3.490: 87.6604% ( 132) 00:14:54.203 3.490 - 3.505: 88.0608% ( 68) 00:14:54.203 3.505 - 3.520: 88.4375% ( 64) 00:14:54.203 3.520 - 3.535: 89.0204% ( 99) 00:14:54.203 3.535 - 3.550: 89.7445% ( 123) 00:14:54.203 3.550 - 3.566: 90.6982% ( 162) 00:14:54.203 3.566 - 3.581: 91.6578% ( 163) 00:14:54.203 3.581 - 3.596: 92.6174% ( 163) 00:14:54.203 3.596 - 3.611: 93.5359% ( 156) 00:14:54.203 3.611 - 3.627: 94.4543% ( 156) 00:14:54.203 3.627 - 3.642: 95.3962% ( 160) 00:14:54.203 3.642 - 3.657: 96.3970% ( 170) 00:14:54.203 3.657 - 3.672: 97.1918% ( 135) 00:14:54.203 3.672 - 3.688: 97.7511% ( 95) 00:14:54.203 3.688 - 3.703: 98.2515% ( 85) 00:14:54.203 3.703 - 3.718: 98.6636% ( 70) 00:14:54.203 3.718 - 3.733: 98.9815% ( 54) 00:14:54.203 3.733 - 3.749: 99.2052% ( 38) 00:14:54.203 3.749 - 3.764: 99.3583% ( 26) 00:14:54.203 3.764 - 3.779: 99.4760% ( 20) 00:14:54.203 3.779 - 3.794: 99.5643% ( 15) 00:14:54.203 3.794 - 3.810: 99.5997% ( 6) 00:14:54.203 3.810 - 3.825: 99.6291% ( 5) 00:14:54.203 3.825 - 3.840: 99.6350% ( 1) 00:14:54.203 3.840 - 3.855: 99.6409% ( 1) 00:14:54.203 3.870 - 3.886: 99.6468% ( 1) 00:14:54.203 5.272 - 5.303: 99.6527% ( 1) 00:14:54.203 5.486 - 5.516: 99.6644% ( 2) 00:14:54.203 5.547 - 5.577: 99.6762% ( 2) 00:14:54.203 5.730 - 5.760: 99.6821% ( 1) 00:14:54.203 5.790 - 5.821: 99.6880% ( 1) 00:14:54.203 5.821 - 5.851: 99.6939% ( 1) 00:14:54.203 6.065 - 6.095: 99.6998% ( 1) 00:14:54.203 6.095 - 6.126: 99.7056% ( 1) 00:14:54.203 
6.156 - 6.187: 99.7115% ( 1) 00:14:54.203 6.187 - 6.217: 99.7174% ( 1) 00:14:54.203 6.217 - 6.248: 99.7233% ( 1) 00:14:54.203 6.430 - 6.461: 99.7292% ( 1) 00:14:54.203 6.461 - 6.491: 99.7351% ( 1) 00:14:54.203 6.613 - 6.644: 99.7410% ( 1) 00:14:54.203 6.644 - 6.674: 99.7469% ( 1) 00:14:54.203 6.735 - 6.766: 99.7527% ( 1) 00:14:54.203 6.796 - 6.827: 99.7645% ( 2) 00:14:54.203 6.918 - 6.949: 99.7763% ( 2) 00:14:54.203 6.949 - 6.979: 99.7822% ( 1) 00:14:54.203 6.979 - 7.010: 99.7939% ( 2) 00:14:54.203 7.040 - 7.070: 99.7998% ( 1) 00:14:54.203 7.101 - 7.131: 99.8057% ( 1) 00:14:54.203 7.192 - 7.223: 99.8116% ( 1) 00:14:54.203 7.223 - 7.253: 99.8175% ( 1) 00:14:54.203 7.284 - 7.314: 99.8234% ( 1) 00:14:54.203 7.314 - 7.345: 99.8293% ( 1) 00:14:54.203 7.497 - 7.528: 99.8352% ( 1) 00:14:54.203 7.589 - 7.619: 99.8410% ( 1) 00:14:54.203 7.741 - 7.771: 99.8469% ( 1) 00:14:54.203 7.771 - 7.802: 99.8528% ( 1) 00:14:54.203 [2024-10-17 19:22:17.837845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.203 7.924 - 7.985: 99.8646% ( 2) 00:14:54.203 8.107 - 8.168: 99.8705% ( 1) 00:14:54.203 8.411 - 8.472: 99.8764% ( 1) 00:14:54.203 8.533 - 8.594: 99.8823% ( 1) 00:14:54.203 8.594 - 8.655: 99.8881% ( 1) 00:14:54.203 8.716 - 8.777: 99.8940% ( 1) 00:14:54.203 8.838 - 8.899: 99.8999% ( 1) 00:14:54.203 10.179 - 10.240: 99.9058% ( 1) 00:14:54.203 13.470 - 13.531: 99.9117% ( 1) 00:14:54.203 13.714 - 13.775: 99.9176% ( 1) 00:14:54.203 15.604 - 15.726: 99.9235% ( 1) 00:14:54.203 3994.575 - 4025.783: 100.0000% ( 13) 00:14:54.203 00:14:54.203 Complete histogram 00:14:54.203 ================== 00:14:54.203 Range in us Cumulative Count 00:14:54.203 1.707 - 1.714: 0.0118% ( 2) 00:14:54.203 1.714 - 1.722: 0.0412% ( 5) 00:14:54.203 1.722 - 1.730: 0.0648% ( 4) 00:14:54.203 1.730 - 1.737: 0.0706% ( 1) 00:14:54.203 1.745 - 1.752: 0.1413% ( 12) 00:14:54.203 1.752 - 1.760: 0.8065% ( 113) 00:14:54.203 1.760 - 1.768: 3.0437% ( 380) 00:14:54.203 1.768 - 1.775: 5.3691% ( 395) 00:14:54.203 1.775 - 1.783: 6.6349% ( 215) 00:14:54.203 1.783 - 1.790: 7.5238% ( 151) 00:14:54.203 1.790 - 1.798: 8.4658% ( 160) 00:14:54.203 1.798 - 1.806: 13.5465% ( 863) 00:14:54.203 1.806 - 1.813: 35.1230% ( 3665) 00:14:54.203 1.813 - 1.821: 67.0964% ( 5431) 00:14:54.203 1.821 - 1.829: 85.0288% ( 3046) 00:14:54.203 1.829 - 1.836: 91.3753% ( 1078) 00:14:54.203 1.836 - 1.844: 94.2776% ( 493) 00:14:54.203 1.844 - 1.851: 96.2086% ( 328) 00:14:54.203 1.851 - 1.859: 97.0270% ( 139) 00:14:54.203 1.859 - 1.867: 97.3449% ( 54) 00:14:54.203 1.867 - 1.874: 97.6392% ( 50) 00:14:54.203 1.874 - 1.882: 97.9689% ( 56) 00:14:54.203 1.882 - 1.890: 98.4929% ( 89) 00:14:54.203 1.890 - 1.897: 98.9344% ( 75) 00:14:54.203 1.897 - 1.905: 99.1464% ( 36) 00:14:54.203 1.905 - 1.912: 99.2464% ( 17) 00:14:54.203 1.912 - 1.920: 99.2759% ( 5) 00:14:54.203 1.920 - 1.928: 99.3171% ( 7) 00:14:54.203 1.928 - 1.935: 99.3524% ( 6) 00:14:54.203 1.943 - 1.950: 99.3583% ( 1) 00:14:54.203 1.966 - 1.981: 99.3642% ( 1) 00:14:54.203 2.027 - 2.042: 99.3701% ( 1) 00:14:54.203 2.164 - 2.179: 99.3760% ( 1) 00:14:54.203 3.474 - 3.490: 99.3818% ( 1) 00:14:54.203 3.779 - 3.794: 99.3877% ( 1) 00:14:54.203 3.855 - 3.870: 99.3936% ( 1) 00:14:54.203 4.023 - 4.053: 99.3995% ( 1) 00:14:54.203 4.053 - 4.084: 99.4054% ( 1) 00:14:54.203 4.145 - 4.175: 99.4113% ( 1) 00:14:54.203 4.450 - 4.480: 99.4172% ( 1) 00:14:54.203 4.480 - 4.510: 99.4231% ( 1) 00:14:54.203 4.785 - 4.815: 99.4289% ( 1) 00:14:54.203 5.059 - 5.090: 99.4348% ( 1) 00:14:54.203 5.242 - 
5.272: 99.4466% ( 2) 00:14:54.203 5.303 - 5.333: 99.4525% ( 1) 00:14:54.203 5.425 - 5.455: 99.4584% ( 1) 00:14:54.203 5.577 - 5.608: 99.4702% ( 2) 00:14:54.203 5.669 - 5.699: 99.4760% ( 1) 00:14:54.203 5.730 - 5.760: 99.4819% ( 1) 00:14:54.203 6.034 - 6.065: 99.4878% ( 1) 00:14:54.203 6.187 - 6.217: 99.4996% ( 2) 00:14:54.203 6.217 - 6.248: 99.5055% ( 1) 00:14:54.204 6.552 - 6.583: 99.5114% ( 1) 00:14:54.204 6.857 - 6.888: 99.5172% ( 1) 00:14:54.204 6.949 - 6.979: 99.5231% ( 1) 00:14:54.204 7.070 - 7.101: 99.5290% ( 1) 00:14:54.204 10.484 - 10.545: 99.5349% ( 1) 00:14:54.204 11.276 - 11.337: 99.5408% ( 1) 00:14:54.204 3994.575 - 4025.783: 99.9941% ( 77) 00:14:54.204 4993.219 - 5024.427: 100.0000% ( 1) 00:14:54.204 00:14:54.204 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:54.204 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:54.204 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:54.204 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:54.204 19:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:54.463 [ 00:14:54.463 { 00:14:54.463 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:54.463 "subtype": "Discovery", 00:14:54.463 "listen_addresses": [], 00:14:54.463 "allow_any_host": true, 00:14:54.463 "hosts": [] 00:14:54.463 }, 00:14:54.463 { 00:14:54.463 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:54.463 "subtype": "NVMe", 00:14:54.463 "listen_addresses": [ 00:14:54.463 { 00:14:54.463 "trtype": "VFIOUSER", 00:14:54.463 "adrfam": "IPv4", 00:14:54.463 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:54.463 "trsvcid": "0" 00:14:54.463 } 00:14:54.463 ], 00:14:54.463 "allow_any_host": true, 00:14:54.463 "hosts": [], 00:14:54.463 "serial_number": "SPDK1", 00:14:54.463 "model_number": "SPDK bdev Controller", 00:14:54.463 "max_namespaces": 32, 00:14:54.463 "min_cntlid": 1, 00:14:54.463 "max_cntlid": 65519, 00:14:54.463 "namespaces": [ 00:14:54.463 { 00:14:54.463 "nsid": 1, 00:14:54.463 "bdev_name": "Malloc1", 00:14:54.463 "name": "Malloc1", 00:14:54.463 "nguid": "32EDB03C0DEA4BADB3148771293CC640", 00:14:54.463 "uuid": "32edb03c-0dea-4bad-b314-8771293cc640" 00:14:54.463 } 00:14:54.463 ] 00:14:54.463 }, 00:14:54.463 { 00:14:54.463 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:54.463 "subtype": "NVMe", 00:14:54.463 "listen_addresses": [ 00:14:54.463 { 00:14:54.463 "trtype": "VFIOUSER", 00:14:54.463 "adrfam": "IPv4", 00:14:54.463 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:54.463 "trsvcid": "0" 00:14:54.463 } 00:14:54.463 ], 00:14:54.463 "allow_any_host": true, 00:14:54.463 "hosts": [], 00:14:54.463 "serial_number": "SPDK2", 00:14:54.463 "model_number": "SPDK bdev Controller", 00:14:54.463 "max_namespaces": 32, 00:14:54.463 "min_cntlid": 1, 00:14:54.463 "max_cntlid": 65519, 00:14:54.463 "namespaces": [ 00:14:54.463 { 00:14:54.463 "nsid": 1, 00:14:54.463 "bdev_name": "Malloc2", 00:14:54.463 "name": "Malloc2", 00:14:54.463 "nguid": "86DF3B35181341709AC1BA9B616BF053", 00:14:54.463 "uuid": "86df3b35-1813-4170-9ac1-ba9b616bf053" 00:14:54.463 } 00:14:54.463 ] 00:14:54.463 } 00:14:54.463 ] 
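The subsystem listing above is plain JSON on stdout, so it can be post-processed directly from a shell. A minimal sketch of that, assuming jq is available on the build host (the rpc.py path is the one used throughout this run; the filter itself is illustrative and not part of the test):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Print one line per attached namespace: subsystem NQN, nsid and uuid.
  $RPC nvmf_get_subsystems | jq -r '
    .[] | select(.subtype == "NVMe") | .nqn as $nqn
        | .namespaces[] | "\($nqn) nsid=\(.nsid) uuid=\(.uuid)"'

Against the listing above this would emit, e.g., nqn.2019-07.io.spdk:cnode1 nsid=1 uuid=32edb03c-0dea-4bad-b314-8771293cc640.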
00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2067718 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:54.463 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:54.722 [2024-10-17 19:22:18.250004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.722 Malloc3 00:14:54.722 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:54.722 [2024-10-17 19:22:18.478676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.722 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:54.981 Asynchronous Event Request test 00:14:54.981 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.981 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.981 Registering asynchronous event callbacks... 00:14:54.981 Starting namespace attribute notice tests for all controllers... 00:14:54.981 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:54.981 aer_cb - Changed Namespace 00:14:54.981 Cleaning up... 
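The xtrace lines above (nvmf_vfio_user.sh@27 through sh@42) compress the AER test's file-based synchronization into single lines; unrolled, the flow is roughly the sketch below. SPDK, traddr and subnqn are shorthand introduced here, and the polling loop is an assumed stand-in for the harness's waitforfile helper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  traddr=/var/run/vfio-user/domain/vfio-user1/1
  subnqn=nqn.2019-07.io.spdk:cnode1

  rm -f /tmp/aer_touch_file
  # Start the AER listener; it touches the file once its event callbacks are armed.
  $SPDK/test/nvme/aer/aer -r "trtype:VFIOUSER traddr:$traddr subnqn:$subnqn" \
      -n 2 -g -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 1; done   # waitforfile
  rm -f /tmp/aer_touch_file
  # Hot-add a second namespace; the target raises a Namespace Attribute Notice.
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns "$subnqn" Malloc3 -n 2
  wait $aerpid   # aer exits once its callback has seen the changed namespace

The "aer_cb - Changed Namespace" line above is that callback firing, and the listing that follows shows the result: Malloc3 attached to cnode1 as nsid 2.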
00:14:54.981 [ 00:14:54.981 { 00:14:54.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:54.981 "subtype": "Discovery", 00:14:54.981 "listen_addresses": [], 00:14:54.981 "allow_any_host": true, 00:14:54.981 "hosts": [] 00:14:54.981 }, 00:14:54.981 { 00:14:54.981 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:54.981 "subtype": "NVMe", 00:14:54.981 "listen_addresses": [ 00:14:54.981 { 00:14:54.981 "trtype": "VFIOUSER", 00:14:54.981 "adrfam": "IPv4", 00:14:54.981 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:54.981 "trsvcid": "0" 00:14:54.981 } 00:14:54.981 ], 00:14:54.981 "allow_any_host": true, 00:14:54.981 "hosts": [], 00:14:54.981 "serial_number": "SPDK1", 00:14:54.981 "model_number": "SPDK bdev Controller", 00:14:54.981 "max_namespaces": 32, 00:14:54.981 "min_cntlid": 1, 00:14:54.982 "max_cntlid": 65519, 00:14:54.982 "namespaces": [ 00:14:54.982 { 00:14:54.982 "nsid": 1, 00:14:54.982 "bdev_name": "Malloc1", 00:14:54.982 "name": "Malloc1", 00:14:54.982 "nguid": "32EDB03C0DEA4BADB3148771293CC640", 00:14:54.982 "uuid": "32edb03c-0dea-4bad-b314-8771293cc640" 00:14:54.982 }, 00:14:54.982 { 00:14:54.982 "nsid": 2, 00:14:54.982 "bdev_name": "Malloc3", 00:14:54.982 "name": "Malloc3", 00:14:54.982 "nguid": "6D29C009ADD640C992D00C3E952976D0", 00:14:54.982 "uuid": "6d29c009-add6-40c9-92d0-0c3e952976d0" 00:14:54.982 } 00:14:54.982 ] 00:14:54.982 }, 00:14:54.982 { 00:14:54.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:54.982 "subtype": "NVMe", 00:14:54.982 "listen_addresses": [ 00:14:54.982 { 00:14:54.982 "trtype": "VFIOUSER", 00:14:54.982 "adrfam": "IPv4", 00:14:54.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:54.982 "trsvcid": "0" 00:14:54.982 } 00:14:54.982 ], 00:14:54.982 "allow_any_host": true, 00:14:54.982 "hosts": [], 00:14:54.982 "serial_number": "SPDK2", 00:14:54.982 "model_number": "SPDK bdev Controller", 00:14:54.982 "max_namespaces": 32, 00:14:54.982 "min_cntlid": 1, 00:14:54.982 "max_cntlid": 65519, 00:14:54.982 "namespaces": [ 00:14:54.982 { 00:14:54.982 "nsid": 1, 00:14:54.982 "bdev_name": "Malloc2", 00:14:54.982 "name": "Malloc2", 00:14:54.982 "nguid": "86DF3B35181341709AC1BA9B616BF053", 00:14:54.982 "uuid": "86df3b35-1813-4170-9ac1-ba9b616bf053" 00:14:54.982 } 00:14:54.982 ] 00:14:54.982 } 00:14:54.982 ] 00:14:54.982 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2067718 00:14:54.982 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:54.982 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:54.982 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:54.982 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:54.982 [2024-10-17 19:22:18.731818] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:14:54.982 [2024-10-17 19:22:18.731867] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067738 ] 00:14:55.244 [2024-10-17 19:22:18.769562] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:55.244 [2024-10-17 19:22:18.783847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:55.244 [2024-10-17 19:22:18.783871] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f26a4d56000 00:14:55.244 [2024-10-17 19:22:18.784851] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.785855] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.786861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.787868] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.788874] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.789884] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.790888] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.791901] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.244 [2024-10-17 19:22:18.792912] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:55.244 [2024-10-17 19:22:18.792925] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f26a4d4b000 00:14:55.244 [2024-10-17 19:22:18.793838] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.244 [2024-10-17 19:22:18.806867] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:55.244 [2024-10-17 19:22:18.806889] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:55.244 [2024-10-17 19:22:18.811974] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:55.244 [2024-10-17 19:22:18.812014] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:55.244 [2024-10-17 19:22:18.812084] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:14:55.244 [2024-10-17 
19:22:18.812098] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:55.244 [2024-10-17 19:22:18.812103] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:55.244 [2024-10-17 19:22:18.812984] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:55.244 [2024-10-17 19:22:18.812994] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:55.244 [2024-10-17 19:22:18.813000] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:55.244 [2024-10-17 19:22:18.813990] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:55.244 [2024-10-17 19:22:18.813999] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:55.244 [2024-10-17 19:22:18.814005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:55.244 [2024-10-17 19:22:18.814992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:55.244 [2024-10-17 19:22:18.815003] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:55.244 [2024-10-17 19:22:18.816001] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:55.244 [2024-10-17 19:22:18.816009] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:55.244 [2024-10-17 19:22:18.816014] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:55.244 [2024-10-17 19:22:18.816019] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:55.244 [2024-10-17 19:22:18.816124] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:55.244 [2024-10-17 19:22:18.816128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:55.244 [2024-10-17 19:22:18.816133] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:55.244 [2024-10-17 19:22:18.817004] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:55.244 [2024-10-17 19:22:18.818014] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:55.244 [2024-10-17 19:22:18.819020] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:14:55.244 [2024-10-17 19:22:18.820024] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.244 [2024-10-17 19:22:18.820064] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:55.244 [2024-10-17 19:22:18.821029] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:55.244 [2024-10-17 19:22:18.821038] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:55.244 [2024-10-17 19:22:18.821042] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.821059] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:55.244 [2024-10-17 19:22:18.821065] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.821078] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.244 [2024-10-17 19:22:18.821082] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.244 [2024-10-17 19:22:18.821086] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.244 [2024-10-17 19:22:18.821096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.244 [2024-10-17 19:22:18.828608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:55.244 [2024-10-17 19:22:18.828619] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:55.244 [2024-10-17 19:22:18.828624] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:55.244 [2024-10-17 19:22:18.828630] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:55.244 [2024-10-17 19:22:18.828634] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:55.244 [2024-10-17 19:22:18.828638] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:55.244 [2024-10-17 19:22:18.828642] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:55.244 [2024-10-17 19:22:18.828646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.828653] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.828662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT 
CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:55.244 [2024-10-17 19:22:18.836609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:55.244 [2024-10-17 19:22:18.836621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.244 [2024-10-17 19:22:18.836628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.244 [2024-10-17 19:22:18.836635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.244 [2024-10-17 19:22:18.836642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.244 [2024-10-17 19:22:18.836646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.836655] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.836663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:55.244 [2024-10-17 19:22:18.844607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:55.244 [2024-10-17 19:22:18.844616] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:55.244 [2024-10-17 19:22:18.844621] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.844627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.844636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:55.244 [2024-10-17 19:22:18.844644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.244 [2024-10-17 19:22:18.852605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.852665] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.852673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.852680] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:55.245 [2024-10-17 19:22:18.852687] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:55.245 [2024-10-17 19:22:18.852690] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number 
of PRP entries: 1 00:14:55.245 [2024-10-17 19:22:18.852697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.860605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.860615] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:55.245 [2024-10-17 19:22:18.860627] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.860634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.860640] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.245 [2024-10-17 19:22:18.860644] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.245 [2024-10-17 19:22:18.860647] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.245 [2024-10-17 19:22:18.860652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.868605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.868618] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.868625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.868632] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.245 [2024-10-17 19:22:18.868636] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.245 [2024-10-17 19:22:18.868639] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.245 [2024-10-17 19:22:18.868645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.876606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.876615] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876622] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876630] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876636] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876640] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876649] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:55.245 [2024-10-17 19:22:18.876654] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:55.245 [2024-10-17 19:22:18.876659] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:55.245 [2024-10-17 19:22:18.876674] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.884606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.884619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.892608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.892620] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.900605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.900617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.907639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.907655] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:55.245 [2024-10-17 19:22:18.907660] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:55.245 [2024-10-17 19:22:18.907663] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:55.245 [2024-10-17 19:22:18.907666] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:55.245 [2024-10-17 19:22:18.907669] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:55.245 [2024-10-17 19:22:18.907675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:55.245 [2024-10-17 19:22:18.907681] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:55.245 [2024-10-17 19:22:18.907685] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:55.245 [2024-10-17 19:22:18.907688] 
nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.245 [2024-10-17 19:22:18.907694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.907700] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:55.245 [2024-10-17 19:22:18.907704] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.245 [2024-10-17 19:22:18.907707] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.245 [2024-10-17 19:22:18.907712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.907719] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:55.245 [2024-10-17 19:22:18.907723] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:55.245 [2024-10-17 19:22:18.907725] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.245 [2024-10-17 19:22:18.907731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:55.245 [2024-10-17 19:22:18.916607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.916621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.916631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:55.245 [2024-10-17 19:22:18.916637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:55.245 ===================================================== 00:14:55.245 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.245 ===================================================== 00:14:55.245 Controller Capabilities/Features 00:14:55.245 ================================ 00:14:55.245 Vendor ID: 4e58 00:14:55.245 Subsystem Vendor ID: 4e58 00:14:55.245 Serial Number: SPDK2 00:14:55.245 Model Number: SPDK bdev Controller 00:14:55.245 Firmware Version: 25.01 00:14:55.245 Recommended Arb Burst: 6 00:14:55.245 IEEE OUI Identifier: 8d 6b 50 00:14:55.245 Multi-path I/O 00:14:55.245 May have multiple subsystem ports: Yes 00:14:55.245 May have multiple controllers: Yes 00:14:55.245 Associated with SR-IOV VF: No 00:14:55.245 Max Data Transfer Size: 131072 00:14:55.245 Max Number of Namespaces: 32 00:14:55.245 Max Number of I/O Queues: 127 00:14:55.245 NVMe Specification Version (VS): 1.3 00:14:55.245 NVMe Specification Version (Identify): 1.3 00:14:55.245 Maximum Queue Entries: 256 00:14:55.245 Contiguous Queues Required: Yes 00:14:55.245 Arbitration Mechanisms Supported 00:14:55.245 Weighted Round Robin: Not Supported 00:14:55.245 Vendor Specific: Not Supported 00:14:55.245 Reset Timeout: 15000 ms 00:14:55.245 Doorbell Stride: 4 bytes 00:14:55.245 NVM Subsystem Reset: Not Supported 00:14:55.245 Command 
Sets Supported 00:14:55.245 NVM Command Set: Supported 00:14:55.245 Boot Partition: Not Supported 00:14:55.245 Memory Page Size Minimum: 4096 bytes 00:14:55.245 Memory Page Size Maximum: 4096 bytes 00:14:55.245 Persistent Memory Region: Not Supported 00:14:55.245 Optional Asynchronous Events Supported 00:14:55.245 Namespace Attribute Notices: Supported 00:14:55.245 Firmware Activation Notices: Not Supported 00:14:55.245 ANA Change Notices: Not Supported 00:14:55.245 PLE Aggregate Log Change Notices: Not Supported 00:14:55.245 LBA Status Info Alert Notices: Not Supported 00:14:55.245 EGE Aggregate Log Change Notices: Not Supported 00:14:55.245 Normal NVM Subsystem Shutdown event: Not Supported 00:14:55.245 Zone Descriptor Change Notices: Not Supported 00:14:55.245 Discovery Log Change Notices: Not Supported 00:14:55.245 Controller Attributes 00:14:55.245 128-bit Host Identifier: Supported 00:14:55.245 Non-Operational Permissive Mode: Not Supported 00:14:55.245 NVM Sets: Not Supported 00:14:55.245 Read Recovery Levels: Not Supported 00:14:55.245 Endurance Groups: Not Supported 00:14:55.245 Predictable Latency Mode: Not Supported 00:14:55.245 Traffic Based Keep ALive: Not Supported 00:14:55.245 Namespace Granularity: Not Supported 00:14:55.245 SQ Associations: Not Supported 00:14:55.245 UUID List: Not Supported 00:14:55.245 Multi-Domain Subsystem: Not Supported 00:14:55.245 Fixed Capacity Management: Not Supported 00:14:55.245 Variable Capacity Management: Not Supported 00:14:55.245 Delete Endurance Group: Not Supported 00:14:55.246 Delete NVM Set: Not Supported 00:14:55.246 Extended LBA Formats Supported: Not Supported 00:14:55.246 Flexible Data Placement Supported: Not Supported 00:14:55.246 00:14:55.246 Controller Memory Buffer Support 00:14:55.246 ================================ 00:14:55.246 Supported: No 00:14:55.246 00:14:55.246 Persistent Memory Region Support 00:14:55.246 ================================ 00:14:55.246 Supported: No 00:14:55.246 00:14:55.246 Admin Command Set Attributes 00:14:55.246 ============================ 00:14:55.246 Security Send/Receive: Not Supported 00:14:55.246 Format NVM: Not Supported 00:14:55.246 Firmware Activate/Download: Not Supported 00:14:55.246 Namespace Management: Not Supported 00:14:55.246 Device Self-Test: Not Supported 00:14:55.246 Directives: Not Supported 00:14:55.246 NVMe-MI: Not Supported 00:14:55.246 Virtualization Management: Not Supported 00:14:55.246 Doorbell Buffer Config: Not Supported 00:14:55.246 Get LBA Status Capability: Not Supported 00:14:55.246 Command & Feature Lockdown Capability: Not Supported 00:14:55.246 Abort Command Limit: 4 00:14:55.246 Async Event Request Limit: 4 00:14:55.246 Number of Firmware Slots: N/A 00:14:55.246 Firmware Slot 1 Read-Only: N/A 00:14:55.246 Firmware Activation Without Reset: N/A 00:14:55.246 Multiple Update Detection Support: N/A 00:14:55.246 Firmware Update Granularity: No Information Provided 00:14:55.246 Per-Namespace SMART Log: No 00:14:55.246 Asymmetric Namespace Access Log Page: Not Supported 00:14:55.246 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:55.246 Command Effects Log Page: Supported 00:14:55.246 Get Log Page Extended Data: Supported 00:14:55.246 Telemetry Log Pages: Not Supported 00:14:55.246 Persistent Event Log Pages: Not Supported 00:14:55.246 Supported Log Pages Log Page: May Support 00:14:55.246 Commands Supported & Effects Log Page: Not Supported 00:14:55.246 Feature Identifiers & Effects Log Page:May Support 00:14:55.246 NVMe-MI Commands & Effects Log Page: May Support 
00:14:55.246 Data Area 4 for Telemetry Log: Not Supported 00:14:55.246 Error Log Page Entries Supported: 128 00:14:55.246 Keep Alive: Supported 00:14:55.246 Keep Alive Granularity: 10000 ms 00:14:55.246 00:14:55.246 NVM Command Set Attributes 00:14:55.246 ========================== 00:14:55.246 Submission Queue Entry Size 00:14:55.246 Max: 64 00:14:55.246 Min: 64 00:14:55.246 Completion Queue Entry Size 00:14:55.246 Max: 16 00:14:55.246 Min: 16 00:14:55.246 Number of Namespaces: 32 00:14:55.246 Compare Command: Supported 00:14:55.246 Write Uncorrectable Command: Not Supported 00:14:55.246 Dataset Management Command: Supported 00:14:55.246 Write Zeroes Command: Supported 00:14:55.246 Set Features Save Field: Not Supported 00:14:55.246 Reservations: Not Supported 00:14:55.246 Timestamp: Not Supported 00:14:55.246 Copy: Supported 00:14:55.246 Volatile Write Cache: Present 00:14:55.246 Atomic Write Unit (Normal): 1 00:14:55.246 Atomic Write Unit (PFail): 1 00:14:55.246 Atomic Compare & Write Unit: 1 00:14:55.246 Fused Compare & Write: Supported 00:14:55.246 Scatter-Gather List 00:14:55.246 SGL Command Set: Supported (Dword aligned) 00:14:55.246 SGL Keyed: Not Supported 00:14:55.246 SGL Bit Bucket Descriptor: Not Supported 00:14:55.246 SGL Metadata Pointer: Not Supported 00:14:55.246 Oversized SGL: Not Supported 00:14:55.246 SGL Metadata Address: Not Supported 00:14:55.246 SGL Offset: Not Supported 00:14:55.246 Transport SGL Data Block: Not Supported 00:14:55.246 Replay Protected Memory Block: Not Supported 00:14:55.246 00:14:55.246 Firmware Slot Information 00:14:55.246 ========================= 00:14:55.246 Active slot: 1 00:14:55.246 Slot 1 Firmware Revision: 25.01 00:14:55.246 00:14:55.246 00:14:55.246 Commands Supported and Effects 00:14:55.246 ============================== 00:14:55.246 Admin Commands 00:14:55.246 -------------- 00:14:55.246 Get Log Page (02h): Supported 00:14:55.246 Identify (06h): Supported 00:14:55.246 Abort (08h): Supported 00:14:55.246 Set Features (09h): Supported 00:14:55.246 Get Features (0Ah): Supported 00:14:55.246 Asynchronous Event Request (0Ch): Supported 00:14:55.246 Keep Alive (18h): Supported 00:14:55.246 I/O Commands 00:14:55.246 ------------ 00:14:55.246 Flush (00h): Supported LBA-Change 00:14:55.246 Write (01h): Supported LBA-Change 00:14:55.246 Read (02h): Supported 00:14:55.246 Compare (05h): Supported 00:14:55.246 Write Zeroes (08h): Supported LBA-Change 00:14:55.246 Dataset Management (09h): Supported LBA-Change 00:14:55.246 Copy (19h): Supported LBA-Change 00:14:55.246 00:14:55.246 Error Log 00:14:55.246 ========= 00:14:55.246 00:14:55.246 Arbitration 00:14:55.246 =========== 00:14:55.246 Arbitration Burst: 1 00:14:55.246 00:14:55.246 Power Management 00:14:55.246 ================ 00:14:55.246 Number of Power States: 1 00:14:55.246 Current Power State: Power State #0 00:14:55.246 Power State #0: 00:14:55.246 Max Power: 0.00 W 00:14:55.246 Non-Operational State: Operational 00:14:55.246 Entry Latency: Not Reported 00:14:55.246 Exit Latency: Not Reported 00:14:55.246 Relative Read Throughput: 0 00:14:55.246 Relative Read Latency: 0 00:14:55.246 Relative Write Throughput: 0 00:14:55.246 Relative Write Latency: 0 00:14:55.246 Idle Power: Not Reported 00:14:55.246 Active Power: Not Reported 00:14:55.246 Non-Operational Permissive Mode: Not Supported 00:14:55.246 00:14:55.246 Health Information 00:14:55.246 ================== 00:14:55.246 Critical Warnings: 00:14:55.246 Available Spare Space: OK 00:14:55.246 Temperature: OK 00:14:55.246 Device 
Reliability: OK 00:14:55.246 Read Only: No 00:14:55.246 Volatile Memory Backup: OK 00:14:55.246 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:55.246 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:55.246 Available Spare: 0% 00:14:55.246 [2024-10-17 19:22:18.916721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:55.246 [2024-10-17 19:22:18.924609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:55.246 [2024-10-17 19:22:18.924638] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:55.246 [2024-10-17 19:22:18.924646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.246 [2024-10-17 19:22:18.924652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.246 [2024-10-17 19:22:18.924657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.246 [2024-10-17 19:22:18.924662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.246 [2024-10-17 19:22:18.924711] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:55.246 [2024-10-17 19:22:18.924722] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:55.246 [2024-10-17 19:22:18.925720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.246 [2024-10-17 19:22:18.925761] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:55.246 [2024-10-17 19:22:18.925768] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:55.246 [2024-10-17 19:22:18.926720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:55.246 [2024-10-17 19:22:18.926731] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:55.246 [2024-10-17 19:22:18.926780] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:55.246 [2024-10-17 19:22:18.927742] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.246 Available Spare Threshold: 0% 00:14:55.246 Life Percentage Used: 0% 00:14:55.246 Data Units Read: 0 00:14:55.246 Data Units Written: 0 00:14:55.246 Host Read Commands: 0 00:14:55.246 Host Write Commands: 0 00:14:55.246 Controller Busy Time: 0 minutes 00:14:55.246 Power Cycles: 0 00:14:55.246 Power On Hours: 0 hours 00:14:55.246 Unsafe Shutdowns: 0 00:14:55.246 Unrecoverable Media Errors: 0 00:14:55.246 Lifetime Error Log Entries: 0 00:14:55.246 Warning Temperature Time: 0 minutes 00:14:55.246 Critical Temperature Time: 0 minutes 00:14:55.246 00:14:55.246 Number of Queues 00:14:55.246 ================ 00:14:55.246 Number of 
I/O Submission Queues: 127 00:14:55.246 Number of I/O Completion Queues: 127 00:14:55.246 00:14:55.246 Active Namespaces 00:14:55.246 ================= 00:14:55.246 Namespace ID:1 00:14:55.246 Error Recovery Timeout: Unlimited 00:14:55.246 Command Set Identifier: NVM (00h) 00:14:55.246 Deallocate: Supported 00:14:55.246 Deallocated/Unwritten Error: Not Supported 00:14:55.246 Deallocated Read Value: Unknown 00:14:55.246 Deallocate in Write Zeroes: Not Supported 00:14:55.246 Deallocated Guard Field: 0xFFFF 00:14:55.246 Flush: Supported 00:14:55.246 Reservation: Supported 00:14:55.246 Namespace Sharing Capabilities: Multiple Controllers 00:14:55.246 Size (in LBAs): 131072 (0GiB) 00:14:55.246 Capacity (in LBAs): 131072 (0GiB) 00:14:55.246 Utilization (in LBAs): 131072 (0GiB) 00:14:55.246 NGUID: 86DF3B35181341709AC1BA9B616BF053 00:14:55.246 UUID: 86df3b35-1813-4170-9ac1-ba9b616bf053 00:14:55.247 Thin Provisioning: Not Supported 00:14:55.247 Per-NS Atomic Units: Yes 00:14:55.247 Atomic Boundary Size (Normal): 0 00:14:55.247 Atomic Boundary Size (PFail): 0 00:14:55.247 Atomic Boundary Offset: 0 00:14:55.247 Maximum Single Source Range Length: 65535 00:14:55.247 Maximum Copy Length: 65535 00:14:55.247 Maximum Source Range Count: 1 00:14:55.247 NGUID/EUI64 Never Reused: No 00:14:55.247 Namespace Write Protected: No 00:14:55.247 Number of LBA Formats: 1 00:14:55.247 Current LBA Format: LBA Format #00 00:14:55.247 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:55.247 00:14:55.247 19:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:55.506 [2024-10-17 19:22:19.156991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:00.776 Initializing NVMe Controllers 00:15:00.776 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:00.776 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:00.776 Initialization complete. Launching workers. 
00:15:00.776 ======================================================== 00:15:00.776 Latency(us) 00:15:00.776 Device Information : IOPS MiB/s Average min max 00:15:00.776 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39961.34 156.10 3202.92 937.33 7629.58 00:15:00.776 ======================================================== 00:15:00.776 Total : 39961.34 156.10 3202.92 937.33 7629.58 00:15:00.776 00:15:00.776 [2024-10-17 19:22:24.262858] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:00.776 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:00.776 [2024-10-17 19:22:24.501580] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.078 Initializing NVMe Controllers 00:15:06.078 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.078 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:06.078 Initialization complete. Launching workers. 00:15:06.078 ======================================================== 00:15:06.078 Latency(us) 00:15:06.078 Device Information : IOPS MiB/s Average min max 00:15:06.078 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39948.14 156.05 3203.98 943.70 8610.01 00:15:06.078 ======================================================== 00:15:06.078 Total : 39948.14 156.05 3203.98 943.70 8610.01 00:15:06.078 00:15:06.078 [2024-10-17 19:22:29.521163] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.078 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:06.078 [2024-10-17 19:22:29.722957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.348 [2024-10-17 19:22:34.862703] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.348 Initializing NVMe Controllers 00:15:11.348 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.348 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.348 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:11.348 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:11.348 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:11.348 Initialization complete. Launching workers. 
00:15:11.348 Starting thread on core 2 00:15:11.348 Starting thread on core 3 00:15:11.348 Starting thread on core 1 00:15:11.348 19:22:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:11.607 [2024-10-17 19:22:35.158040] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.894 [2024-10-17 19:22:38.212862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.894 Initializing NVMe Controllers 00:15:14.894 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.894 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.894 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:14.894 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:14.894 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:14.894 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:14.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:14.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:14.894 Initialization complete. Launching workers. 00:15:14.894 Starting thread on core 1 with urgent priority queue 00:15:14.894 Starting thread on core 2 with urgent priority queue 00:15:14.894 Starting thread on core 3 with urgent priority queue 00:15:14.894 Starting thread on core 0 with urgent priority queue 00:15:14.894 SPDK bdev Controller (SPDK2 ) core 0: 6438.00 IO/s 15.53 secs/100000 ios 00:15:14.894 SPDK bdev Controller (SPDK2 ) core 1: 7083.00 IO/s 14.12 secs/100000 ios 00:15:14.894 SPDK bdev Controller (SPDK2 ) core 2: 5711.67 IO/s 17.51 secs/100000 ios 00:15:14.894 SPDK bdev Controller (SPDK2 ) core 3: 8015.67 IO/s 12.48 secs/100000 ios 00:15:14.894 ======================================================== 00:15:14.894 00:15:14.894 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:14.894 [2024-10-17 19:22:38.496016] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:14.894 Initializing NVMe Controllers 00:15:14.894 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.894 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:14.894 Namespace ID: 1 size: 0GB 00:15:14.894 Initialization complete. 00:15:14.894 INFO: using host memory buffer for IO 00:15:14.894 Hello world! 
00:15:14.894 [2024-10-17 19:22:38.506068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:14.894 19:22:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:15.152 [2024-10-17 19:22:38.796401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.530 Initializing NVMe Controllers 00:15:16.530 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.531 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.531 Initialization complete. Launching workers. 00:15:16.531 submit (in ns) avg, min, max = 6689.8, 3158.1, 4001625.7 00:15:16.531 complete (in ns) avg, min, max = 21032.3, 1707.6, 4995758.1 00:15:16.531 00:15:16.531 Submit histogram 00:15:16.531 ================ 00:15:16.531 Range in us Cumulative Count 00:15:16.531 3.154 - 3.170: 0.0299% ( 5) 00:15:16.531 3.170 - 3.185: 0.0479% ( 3) 00:15:16.531 3.185 - 3.200: 0.0718% ( 4) 00:15:16.531 3.200 - 3.215: 0.1616% ( 15) 00:15:16.531 3.215 - 3.230: 0.9698% ( 135) 00:15:16.531 3.230 - 3.246: 4.0290% ( 511) 00:15:16.531 3.246 - 3.261: 9.1535% ( 856) 00:15:16.531 3.261 - 3.276: 15.0383% ( 983) 00:15:16.531 3.276 - 3.291: 21.6116% ( 1098) 00:15:16.531 3.291 - 3.307: 28.4243% ( 1138) 00:15:16.531 3.307 - 3.322: 33.8362% ( 904) 00:15:16.531 3.322 - 3.337: 39.7450% ( 987) 00:15:16.531 3.337 - 3.352: 45.6418% ( 985) 00:15:16.531 3.352 - 3.368: 51.5565% ( 988) 00:15:16.531 3.368 - 3.383: 56.4535% ( 818) 00:15:16.531 3.383 - 3.398: 63.2184% ( 1130) 00:15:16.531 3.398 - 3.413: 69.7079% ( 1084) 00:15:16.531 3.413 - 3.429: 75.0180% ( 887) 00:15:16.531 3.429 - 3.444: 79.8910% ( 814) 00:15:16.531 3.444 - 3.459: 83.2435% ( 560) 00:15:16.531 3.459 - 3.474: 85.6202% ( 397) 00:15:16.531 3.474 - 3.490: 86.8235% ( 201) 00:15:16.531 3.490 - 3.505: 87.5898% ( 128) 00:15:16.531 3.505 - 3.520: 87.9969% ( 68) 00:15:16.531 3.520 - 3.535: 88.4459% ( 75) 00:15:16.531 3.535 - 3.550: 89.1044% ( 110) 00:15:16.531 3.550 - 3.566: 89.9964% ( 149) 00:15:16.531 3.566 - 3.581: 90.8046% ( 135) 00:15:16.531 3.581 - 3.596: 91.8463% ( 174) 00:15:16.531 3.596 - 3.611: 92.6904% ( 141) 00:15:16.531 3.611 - 3.627: 93.7141% ( 171) 00:15:16.531 3.627 - 3.642: 94.6480% ( 156) 00:15:16.531 3.642 - 3.657: 95.6897% ( 174) 00:15:16.531 3.657 - 3.672: 96.4859% ( 133) 00:15:16.531 3.672 - 3.688: 97.3180% ( 139) 00:15:16.531 3.688 - 3.703: 97.8807% ( 94) 00:15:16.531 3.703 - 3.718: 98.3178% ( 73) 00:15:16.531 3.718 - 3.733: 98.6710% ( 59) 00:15:16.531 3.733 - 3.749: 98.9703% ( 50) 00:15:16.531 3.749 - 3.764: 99.2217% ( 42) 00:15:16.531 3.764 - 3.779: 99.4552% ( 39) 00:15:16.531 3.779 - 3.794: 99.5809% ( 21) 00:15:16.531 3.794 - 3.810: 99.6528% ( 12) 00:15:16.531 3.810 - 3.825: 99.6707% ( 3) 00:15:16.531 3.825 - 3.840: 99.6827% ( 2) 00:15:16.531 3.840 - 3.855: 99.7007% ( 3) 00:15:16.531 3.870 - 3.886: 99.7067% ( 1) 00:15:16.531 5.120 - 5.150: 99.7126% ( 1) 00:15:16.531 5.303 - 5.333: 99.7246% ( 2) 00:15:16.531 5.394 - 5.425: 99.7306% ( 1) 00:15:16.531 5.486 - 5.516: 99.7426% ( 2) 00:15:16.531 5.638 - 5.669: 99.7486% ( 1) 00:15:16.531 5.912 - 5.943: 99.7545% ( 1) 00:15:16.531 6.034 - 6.065: 99.7605% ( 1) 00:15:16.531 6.095 - 6.126: 99.7785% ( 3) 00:15:16.531 6.156 - 6.187: 99.7905% ( 2) 00:15:16.531 6.187 - 6.217: 99.7965% ( 1) 00:15:16.531 
6.309 - 6.339: 99.8024% ( 1) 00:15:16.531 6.370 - 6.400: 99.8084% ( 1) 00:15:16.531 6.735 - 6.766: 99.8204% ( 2) 00:15:16.531 6.888 - 6.918: 99.8264% ( 1) 00:15:16.531 6.918 - 6.949: 99.8324% ( 1) 00:15:16.531 7.131 - 7.162: 99.8443% ( 2) 00:15:16.531 7.192 - 7.223: 99.8503% ( 1) 00:15:16.531 7.314 - 7.345: 99.8563% ( 1) 00:15:16.531 7.375 - 7.406: 99.8623% ( 1) 00:15:16.531 7.619 - 7.650: 99.8683% ( 1) 00:15:16.531 8.290 - 8.350: 99.8863% ( 3) 00:15:16.531 8.472 - 8.533: 99.8922% ( 1) 00:15:16.531 9.387 - 9.448: 99.8982% ( 1) 00:15:16.531 9.630 - 9.691: 99.9042% ( 1) 00:15:16.531 12.678 - 12.739: 99.9102% ( 1) 00:15:16.531 19.017 - 19.139: 99.9162% ( 1) 00:15:16.531 3167.573 - 3183.177: 99.9222% ( 1) 00:15:16.531 3994.575 - 4025.783: 100.0000% ( 13) 00:15:16.531 00:15:16.531 Complete histogram 00:15:16.531 ================== 00:15:16.531 Range in us Cumulative Count 00:15:16.531 1.707 - 1.714: 0.0239% ( 4) 00:15:16.531 1.714 - 1.722: 0.0898% ( 11) 00:15:16.531 1.722 - [2024-10-17 19:22:39.888559] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.531 1.730: 0.1137% ( 4) 00:15:16.531 1.737 - 1.745: 0.1317% ( 3) 00:15:16.531 1.745 - 1.752: 0.1437% ( 2) 00:15:16.531 1.752 - 1.760: 0.3592% ( 36) 00:15:16.531 1.760 - 1.768: 3.1489% ( 466) 00:15:16.531 1.768 - 1.775: 10.3269% ( 1199) 00:15:16.531 1.775 - 1.783: 15.8525% ( 923) 00:15:16.531 1.783 - 1.790: 17.7981% ( 325) 00:15:16.531 1.790 - 1.798: 19.1691% ( 229) 00:15:16.531 1.798 - 1.806: 20.3125% ( 191) 00:15:16.531 1.806 - 1.813: 21.7253% ( 236) 00:15:16.531 1.813 - 1.821: 30.0108% ( 1384) 00:15:16.531 1.821 - 1.829: 52.8257% ( 3811) 00:15:16.531 1.829 - 1.836: 76.7301% ( 3993) 00:15:16.531 1.836 - 1.844: 87.9610% ( 1876) 00:15:16.531 1.844 - 1.851: 92.5886% ( 773) 00:15:16.531 1.851 - 1.859: 95.0491% ( 411) 00:15:16.531 1.859 - 1.867: 96.7792% ( 289) 00:15:16.531 1.867 - 1.874: 97.6173% ( 140) 00:15:16.531 1.874 - 1.882: 97.9406% ( 54) 00:15:16.531 1.882 - 1.890: 98.1442% ( 34) 00:15:16.531 1.890 - 1.897: 98.4016% ( 43) 00:15:16.531 1.897 - 1.905: 98.6650% ( 44) 00:15:16.531 1.905 - 1.912: 98.8326% ( 28) 00:15:16.531 1.912 - 1.920: 99.0182% ( 31) 00:15:16.531 1.920 - 1.928: 99.0960% ( 13) 00:15:16.531 1.928 - 1.935: 99.1679% ( 12) 00:15:16.531 1.935 - 1.943: 99.2098% ( 7) 00:15:16.531 1.943 - 1.950: 99.2696% ( 10) 00:15:16.531 1.950 - 1.966: 99.3056% ( 6) 00:15:16.531 1.966 - 1.981: 99.3235% ( 3) 00:15:16.531 1.996 - 2.011: 99.3295% ( 1) 00:15:16.531 2.316 - 2.331: 99.3355% ( 1) 00:15:16.531 3.764 - 3.779: 99.3415% ( 1) 00:15:16.531 3.810 - 3.825: 99.3475% ( 1) 00:15:16.531 3.886 - 3.901: 99.3534% ( 1) 00:15:16.531 4.175 - 4.206: 99.3594% ( 1) 00:15:16.531 4.267 - 4.297: 99.3654% ( 1) 00:15:16.531 4.480 - 4.510: 99.3714% ( 1) 00:15:16.531 5.059 - 5.090: 99.3774% ( 1) 00:15:16.531 5.120 - 5.150: 99.3894% ( 2) 00:15:16.531 5.211 - 5.242: 99.4013% ( 2) 00:15:16.531 5.303 - 5.333: 99.4073% ( 1) 00:15:16.531 5.516 - 5.547: 99.4133% ( 1) 00:15:16.531 5.638 - 5.669: 99.4193% ( 1) 00:15:16.531 5.760 - 5.790: 99.4253% ( 1) 00:15:16.531 5.882 - 5.912: 99.4373% ( 2) 00:15:16.531 6.004 - 6.034: 99.4432% ( 1) 00:15:16.531 6.065 - 6.095: 99.4492% ( 1) 00:15:16.531 6.187 - 6.217: 99.4552% ( 1) 00:15:16.531 6.217 - 6.248: 99.4612% ( 1) 00:15:16.531 6.278 - 6.309: 99.4672% ( 1) 00:15:16.531 6.309 - 6.339: 99.4732% ( 1) 00:15:16.531 6.674 - 6.705: 99.4792% ( 1) 00:15:16.531 6.796 - 6.827: 99.4852% ( 1) 00:15:16.531 6.979 - 7.010: 99.4911% ( 1) 00:15:16.531 7.619 - 7.650: 99.4971% ( 1) 00:15:16.531 
8.533 - 8.594: 99.5031% ( 1) 00:15:16.531 10.423 - 10.484: 99.5091% ( 1) 00:15:16.531 38.278 - 38.522: 99.5151% ( 1) 00:15:16.531 138.484 - 139.459: 99.5211% ( 1) 00:15:16.531 3011.535 - 3027.139: 99.5271% ( 1) 00:15:16.531 3994.575 - 4025.783: 99.9880% ( 77) 00:15:16.531 4993.219 - 5024.427: 100.0000% ( 2) 00:15:16.531 00:15:16.531 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:16.531 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:16.531 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:16.531 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:16.531 19:22:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:16.531 [ 00:15:16.531 { 00:15:16.531 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:16.531 "subtype": "Discovery", 00:15:16.531 "listen_addresses": [], 00:15:16.531 "allow_any_host": true, 00:15:16.531 "hosts": [] 00:15:16.531 }, 00:15:16.531 { 00:15:16.531 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:16.531 "subtype": "NVMe", 00:15:16.531 "listen_addresses": [ 00:15:16.531 { 00:15:16.531 "trtype": "VFIOUSER", 00:15:16.531 "adrfam": "IPv4", 00:15:16.531 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:16.531 "trsvcid": "0" 00:15:16.531 } 00:15:16.531 ], 00:15:16.531 "allow_any_host": true, 00:15:16.531 "hosts": [], 00:15:16.531 "serial_number": "SPDK1", 00:15:16.531 "model_number": "SPDK bdev Controller", 00:15:16.531 "max_namespaces": 32, 00:15:16.531 "min_cntlid": 1, 00:15:16.531 "max_cntlid": 65519, 00:15:16.531 "namespaces": [ 00:15:16.531 { 00:15:16.531 "nsid": 1, 00:15:16.531 "bdev_name": "Malloc1", 00:15:16.531 "name": "Malloc1", 00:15:16.531 "nguid": "32EDB03C0DEA4BADB3148771293CC640", 00:15:16.531 "uuid": "32edb03c-0dea-4bad-b314-8771293cc640" 00:15:16.531 }, 00:15:16.531 { 00:15:16.531 "nsid": 2, 00:15:16.531 "bdev_name": "Malloc3", 00:15:16.531 "name": "Malloc3", 00:15:16.531 "nguid": "6D29C009ADD640C992D00C3E952976D0", 00:15:16.531 "uuid": "6d29c009-add6-40c9-92d0-0c3e952976d0" 00:15:16.531 } 00:15:16.531 ] 00:15:16.531 }, 00:15:16.531 { 00:15:16.532 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:16.532 "subtype": "NVMe", 00:15:16.532 "listen_addresses": [ 00:15:16.532 { 00:15:16.532 "trtype": "VFIOUSER", 00:15:16.532 "adrfam": "IPv4", 00:15:16.532 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:16.532 "trsvcid": "0" 00:15:16.532 } 00:15:16.532 ], 00:15:16.532 "allow_any_host": true, 00:15:16.532 "hosts": [], 00:15:16.532 "serial_number": "SPDK2", 00:15:16.532 "model_number": "SPDK bdev Controller", 00:15:16.532 "max_namespaces": 32, 00:15:16.532 "min_cntlid": 1, 00:15:16.532 "max_cntlid": 65519, 00:15:16.532 "namespaces": [ 00:15:16.532 { 00:15:16.532 "nsid": 1, 00:15:16.532 "bdev_name": "Malloc2", 00:15:16.532 "name": "Malloc2", 00:15:16.532 "nguid": "86DF3B35181341709AC1BA9B616BF053", 00:15:16.532 "uuid": "86df3b35-1813-4170-9ac1-ba9b616bf053" 00:15:16.532 } 00:15:16.532 ] 00:15:16.532 } 00:15:16.532 ] 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:16.532 19:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2071217 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:16.532 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:16.532 [2024-10-17 19:22:40.304062] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.791 Malloc4 00:15:16.791 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:16.791 [2024-10-17 19:22:40.554828] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:17.050 Asynchronous Event Request test 00:15:17.050 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.050 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.050 Registering asynchronous event callbacks... 00:15:17.050 Starting namespace attribute notice tests for all controllers... 00:15:17.050 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:17.050 aer_cb - Changed Namespace 00:15:17.050 Cleaning up... 
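The @90 aer_vfio_user step above is event-driven rather than throughput-driven: the aer tool is left waiting against cnode2 with a touch-file handshake, and the script then hot-adds a second namespace (Malloc4) over RPC so the controller raises the "Changed Namespace" notice seen in the aer_cb line; the nvmf_get_subsystems dump that follows confirms Malloc4 as nsid 2. A simplified sketch of that sequence (commands as logged; the polling loop is a stand-in for the real waitforfile helper in autotest_common.sh):

# Simplified sketch of the AER flow traced above.
./test/nvme/aer/aer -g -n 2 -t /tmp/aer_touch_file \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' &
aerpid=$!
until [ -e /tmp/aer_touch_file ]; do sleep 1; done   # stand-in for waitforfile
rm -f /tmp/aer_touch_file
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4            # new backing bdev
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # raises the AEN
wait $aerpid                                         # aer exits after observing the notice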
00:15:17.050 [ 00:15:17.050 { 00:15:17.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:17.050 "subtype": "Discovery", 00:15:17.050 "listen_addresses": [], 00:15:17.050 "allow_any_host": true, 00:15:17.050 "hosts": [] 00:15:17.050 }, 00:15:17.050 { 00:15:17.050 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:17.050 "subtype": "NVMe", 00:15:17.050 "listen_addresses": [ 00:15:17.050 { 00:15:17.050 "trtype": "VFIOUSER", 00:15:17.050 "adrfam": "IPv4", 00:15:17.050 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:17.050 "trsvcid": "0" 00:15:17.050 } 00:15:17.050 ], 00:15:17.050 "allow_any_host": true, 00:15:17.050 "hosts": [], 00:15:17.050 "serial_number": "SPDK1", 00:15:17.050 "model_number": "SPDK bdev Controller", 00:15:17.050 "max_namespaces": 32, 00:15:17.050 "min_cntlid": 1, 00:15:17.050 "max_cntlid": 65519, 00:15:17.050 "namespaces": [ 00:15:17.050 { 00:15:17.050 "nsid": 1, 00:15:17.050 "bdev_name": "Malloc1", 00:15:17.050 "name": "Malloc1", 00:15:17.050 "nguid": "32EDB03C0DEA4BADB3148771293CC640", 00:15:17.050 "uuid": "32edb03c-0dea-4bad-b314-8771293cc640" 00:15:17.050 }, 00:15:17.050 { 00:15:17.050 "nsid": 2, 00:15:17.050 "bdev_name": "Malloc3", 00:15:17.050 "name": "Malloc3", 00:15:17.050 "nguid": "6D29C009ADD640C992D00C3E952976D0", 00:15:17.050 "uuid": "6d29c009-add6-40c9-92d0-0c3e952976d0" 00:15:17.050 } 00:15:17.050 ] 00:15:17.050 }, 00:15:17.050 { 00:15:17.050 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:17.050 "subtype": "NVMe", 00:15:17.050 "listen_addresses": [ 00:15:17.050 { 00:15:17.050 "trtype": "VFIOUSER", 00:15:17.050 "adrfam": "IPv4", 00:15:17.050 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:17.050 "trsvcid": "0" 00:15:17.050 } 00:15:17.050 ], 00:15:17.050 "allow_any_host": true, 00:15:17.050 "hosts": [], 00:15:17.050 "serial_number": "SPDK2", 00:15:17.050 "model_number": "SPDK bdev Controller", 00:15:17.050 "max_namespaces": 32, 00:15:17.050 "min_cntlid": 1, 00:15:17.050 "max_cntlid": 65519, 00:15:17.050 "namespaces": [ 00:15:17.050 { 00:15:17.050 "nsid": 1, 00:15:17.050 "bdev_name": "Malloc2", 00:15:17.050 "name": "Malloc2", 00:15:17.050 "nguid": "86DF3B35181341709AC1BA9B616BF053", 00:15:17.050 "uuid": "86df3b35-1813-4170-9ac1-ba9b616bf053" 00:15:17.050 }, 00:15:17.050 { 00:15:17.050 "nsid": 2, 00:15:17.050 "bdev_name": "Malloc4", 00:15:17.050 "name": "Malloc4", 00:15:17.050 "nguid": "3CC4480A9E054172B2DE289779286385", 00:15:17.050 "uuid": "3cc4480a-9e05-4172-b2de-289779286385" 00:15:17.050 } 00:15:17.050 ] 00:15:17.050 } 00:15:17.050 ] 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2071217 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2063565 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2063565 ']' 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2063565 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2063565 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2063565' 00:15:17.050 killing process with pid 2063565 00:15:17.050 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2063565 00:15:17.051 19:22:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2063565 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2071425 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2071425' 00:15:17.310 Process pid: 2071425 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2071425 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2071425 ']' 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:17.310 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:17.569 [2024-10-17 19:22:41.110863] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:17.569 [2024-10-17 19:22:41.111698] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:15:17.569 [2024-10-17 19:22:41.111735] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.569 [2024-10-17 19:22:41.186829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.569 [2024-10-17 19:22:41.228108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.569 [2024-10-17 19:22:41.228146] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.569 [2024-10-17 19:22:41.228154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.569 [2024-10-17 19:22:41.228160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.569 [2024-10-17 19:22:41.228165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.569 [2024-10-17 19:22:41.232618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.569 [2024-10-17 19:22:41.232644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.569 [2024-10-17 19:22:41.232758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.569 [2024-10-17 19:22:41.232758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.569 [2024-10-17 19:22:41.299033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:17.569 [2024-10-17 19:22:41.299979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:17.569 [2024-10-17 19:22:41.300027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:17.569 [2024-10-17 19:22:41.300275] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:17.569 [2024-10-17 19:22:41.300342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
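From here the target is rebuilt in interrupt mode: nvmf_tgt comes up on cores 0-3 with --interrupt-mode (hence the reactor and "intr mode" notices above), and the @64 step just below creates the VFIOUSER transport with -M -I, the arguments setup_nvmf_vfio_user was handed at @108. A rough sketch of that bring-up, with arguments as logged:

# Sketch: interrupt-mode target on cores 0-3, then an interrupt-capable
# VFIOUSER transport (-M -I, as passed through setup_nvmf_vfio_user).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
nvmfpid=$!
# the script waits on the RPC socket here (waitforlisten $nvmfpid), then:
./scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I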
00:15:17.569 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.569 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:17.569 19:22:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:18.947 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:18.947 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:18.947 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:18.947 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:18.947 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:18.947 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.205 Malloc1 00:15:19.205 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:19.205 19:22:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:19.464 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:19.722 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.722 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:19.722 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:19.980 Malloc2 00:15:19.980 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:20.240 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:20.240 19:22:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2071425 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 2071425 ']' 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2071425 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2071425 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2071425' 00:15:20.499 killing process with pid 2071425 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2071425 00:15:20.499 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2071425 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.758 00:15:20.758 real 0m50.819s 00:15:20.758 user 3m16.637s 00:15:20.758 sys 0m3.212s 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.758 ************************************ 00:15:20.758 END TEST nvmf_vfio_user 00:15:20.758 ************************************ 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:20.758 ************************************ 00:15:20.758 START TEST nvmf_vfio_user_nvme_compliance 00:15:20.758 ************************************ 00:15:20.758 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:21.018 * Looking for test storage... 
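Steps @66 through @74 above repeat one bring-up recipe per emulated controller: a per-controller socket directory, a 64 MiB malloc bdev (64 x 512-byte blocks per MiB), a subsystem, a namespace, and a VFIOUSER listener rooted at that directory. Condensed into the loop the script effectively runs (seq 1 2 and all RPC arguments as logged):

# Sketch of the per-controller setup loop (@68-@74 above).
for i in $(seq 1 2); do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done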
00:15:21.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.018 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.019 --rc genhtml_branch_coverage=1 00:15:21.019 --rc genhtml_function_coverage=1 00:15:21.019 --rc genhtml_legend=1 00:15:21.019 --rc geninfo_all_blocks=1 00:15:21.019 --rc geninfo_unexecuted_blocks=1 00:15:21.019 00:15:21.019 ' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.019 --rc genhtml_branch_coverage=1 00:15:21.019 --rc genhtml_function_coverage=1 00:15:21.019 --rc genhtml_legend=1 00:15:21.019 --rc geninfo_all_blocks=1 00:15:21.019 --rc geninfo_unexecuted_blocks=1 00:15:21.019 00:15:21.019 ' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.019 --rc genhtml_branch_coverage=1 00:15:21.019 --rc genhtml_function_coverage=1 00:15:21.019 --rc genhtml_legend=1 00:15:21.019 --rc geninfo_all_blocks=1 00:15:21.019 --rc geninfo_unexecuted_blocks=1 00:15:21.019 00:15:21.019 ' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:21.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.019 --rc genhtml_branch_coverage=1 00:15:21.019 --rc genhtml_function_coverage=1 00:15:21.019 --rc genhtml_legend=1 00:15:21.019 --rc geninfo_all_blocks=1 00:15:21.019 --rc 
geninfo_unexecuted_blocks=1 00:15:21.019 00:15:21.019 ' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:21.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2072182 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2072182' 00:15:21.019 Process pid: 2072182 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2072182 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2072182 ']' 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:21.019 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.019 [2024-10-17 19:22:44.738484] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:15:21.019 [2024-10-17 19:22:44.738532] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.278 [2024-10-17 19:22:44.812091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:21.278 [2024-10-17 19:22:44.852950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.278 [2024-10-17 19:22:44.852985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.278 [2024-10-17 19:22:44.852991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.278 [2024-10-17 19:22:44.852997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.278 [2024-10-17 19:22:44.853002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:21.278 [2024-10-17 19:22:44.854390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:21.278 [2024-10-17 19:22:44.854499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.279 [2024-10-17 19:22:44.854501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:21.279 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.279 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:21.279 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.216 malloc0 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:22.216 19:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.216 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.475 19:22:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:22.475 00:15:22.475 00:15:22.475 CUnit - A unit testing framework for C - Version 2.1-3 00:15:22.475 http://cunit.sourceforge.net/ 00:15:22.475 00:15:22.475 00:15:22.475 Suite: nvme_compliance 00:15:22.475 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-17 19:22:46.181518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.475 [2024-10-17 19:22:46.182861] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:22.475 [2024-10-17 19:22:46.182875] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:22.475 [2024-10-17 19:22:46.182881] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:22.475 [2024-10-17 19:22:46.184534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.475 passed 00:15:22.734 Test: admin_identify_ctrlr_verify_fused ...[2024-10-17 19:22:46.262086] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.734 [2024-10-17 19:22:46.265112] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.734 passed 00:15:22.734 Test: admin_identify_ns ...[2024-10-17 19:22:46.344839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.734 [2024-10-17 19:22:46.405612] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:22.734 [2024-10-17 19:22:46.413625] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:22.734 [2024-10-17 19:22:46.434720] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:22.734 passed 00:15:22.734 Test: admin_get_features_mandatory_features ...[2024-10-17 19:22:46.511533] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.734 [2024-10-17 19:22:46.514552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.993 passed 00:15:22.993 Test: admin_get_features_optional_features ...[2024-10-17 19:22:46.590033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.993 [2024-10-17 19:22:46.593047] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:22.993 passed 00:15:22.993 Test: admin_set_features_number_of_queues ...[2024-10-17 19:22:46.670754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:22.993 [2024-10-17 19:22:46.776694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.255 passed 00:15:23.255 Test: admin_get_log_page_mandatory_logs ...[2024-10-17 19:22:46.850365] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.255 [2024-10-17 19:22:46.853386] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.255 passed 00:15:23.255 Test: admin_get_log_page_with_lpo ...[2024-10-17 19:22:46.930918] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.255 [2024-10-17 19:22:47.002608] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:23.255 [2024-10-17 19:22:47.015675] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.255 passed 00:15:23.516 Test: fabric_property_get ...[2024-10-17 19:22:47.091474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.516 [2024-10-17 19:22:47.092711] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:23.516 [2024-10-17 19:22:47.094491] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.516 passed 00:15:23.516 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-17 19:22:47.169967] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.516 [2024-10-17 19:22:47.171213] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:23.516 [2024-10-17 19:22:47.172992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.516 passed 00:15:23.516 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-17 19:22:47.250892] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.775 [2024-10-17 19:22:47.334613] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:23.775 [2024-10-17 19:22:47.350607] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:23.775 [2024-10-17 19:22:47.355692] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:23.775 passed 00:15:23.775 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-17 19:22:47.431312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:23.775 [2024-10-17 19:22:47.432544] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:23.775 [2024-10-17 19:22:47.434336] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller
00:15:23.775 passed
00:15:23.775 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-17 19:22:47.512047] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:24.033 [2024-10-17 19:22:47.588609] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:24.033 [2024-10-17 19:22:47.612611] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:15:24.033 [2024-10-17 19:22:47.617693] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:24.033 passed
00:15:24.033 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-17 19:22:47.694441] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:24.033 [2024-10-17 19:22:47.695677] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:15:24.033 [2024-10-17 19:22:47.695701] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:15:24.033 [2024-10-17 19:22:47.697458] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:24.033 passed
00:15:24.033 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-17 19:22:47.772874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:24.292 [2024-10-17 19:22:47.868626] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:15:24.292 [2024-10-17 19:22:47.876607] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:15:24.292 [2024-10-17 19:22:47.884615] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:15:24.292 [2024-10-17 19:22:47.892610] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:15:24.292 [2024-10-17 19:22:47.921712] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:24.292 passed
00:15:24.292 Test: admin_create_io_sq_verify_pc ...[2024-10-17 19:22:47.996439] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:24.292 [2024-10-17 19:22:48.012618] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:15:24.292 [2024-10-17 19:22:48.030614] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:24.292 passed
00:15:24.551 Test: admin_create_io_qp_max_qps ...[2024-10-17 19:22:48.108164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:25.487 [2024-10-17 19:22:49.203611] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:15:26.055 [2024-10-17 19:22:49.591652] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:26.055 passed
00:15:26.055 Test: admin_create_io_sq_shared_cq ...[2024-10-17 19:22:49.667616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:15:26.055 [2024-10-17 19:22:49.801612] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:15:26.055 [2024-10-17 19:22:49.838671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:15:26.314 passed
00:15:26.314
00:15:26.314 Run Summary: Type Total Ran Passed Failed Inactive
00:15:26.314 suites 1 1 n/a 0 0
00:15:26.314 tests 18 18 18 0 0
00:15:26.314 asserts 360 360 360 0 n/a
00:15:26.314
00:15:26.314 Elapsed time = 1.500 seconds
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2072182
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2072182 ']'
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2072182
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2072182
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:15:26.314 19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2072182'
killing process with pid 2072182
19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2072182
19:22:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2072182
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:15:26.573
00:15:26.573 real 0m5.636s
00:15:26.573 user 0m15.726s
00:15:26.573 sys 0m0.542s
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:15:26.573 ************************************
00:15:26.573 END TEST nvmf_vfio_user_nvme_compliance
00:15:26.573 ************************************
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:26.573 ************************************
00:15:26.573 START TEST nvmf_vfio_user_fuzz
00:15:26.573 ************************************
00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:15:26.573 * Looking for test storage...
00:15:26.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:26.573 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.834 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.835 --rc genhtml_branch_coverage=1 00:15:26.835 --rc genhtml_function_coverage=1 00:15:26.835 --rc genhtml_legend=1 00:15:26.835 --rc geninfo_all_blocks=1 00:15:26.835 --rc geninfo_unexecuted_blocks=1 00:15:26.835 00:15:26.835 ' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.835 --rc genhtml_branch_coverage=1 00:15:26.835 --rc genhtml_function_coverage=1 00:15:26.835 --rc genhtml_legend=1 00:15:26.835 --rc geninfo_all_blocks=1 00:15:26.835 --rc geninfo_unexecuted_blocks=1 00:15:26.835 00:15:26.835 ' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.835 --rc genhtml_branch_coverage=1 00:15:26.835 --rc genhtml_function_coverage=1 00:15:26.835 --rc genhtml_legend=1 00:15:26.835 --rc geninfo_all_blocks=1 00:15:26.835 --rc geninfo_unexecuted_blocks=1 00:15:26.835 00:15:26.835 ' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:26.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.835 --rc genhtml_branch_coverage=1 00:15:26.835 --rc genhtml_function_coverage=1 00:15:26.835 --rc genhtml_legend=1 00:15:26.835 --rc geninfo_all_blocks=1 00:15:26.835 --rc geninfo_unexecuted_blocks=1 00:15:26.835 00:15:26.835 ' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:26.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2073170 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2073170' 00:15:26.835 Process pid: 2073170 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2073170 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2073170 ']' 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
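Condensed, the fuzz-target bring-up traced above amounts to the following sketch (waitforlisten and killprocess are SPDK helpers from test/common/autotest_common.sh; paths are abbreviated, and this is a reconstruction from the trace, not a verbatim replay of vfio_user_fuzz.sh):

# launch a single-core nvmf target with all trace flags, then wait for its RPC socket
nqn=nqn.2021-09.io.spdk:cnode0
traddr=/var/run/vfio-user
export TEST_TRANSPORT=VFIOUSER
rm -rf "$traddr"                                # start from a clean vfio-user socket dir
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # -m 0x1: one reactor core
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$nvmfpid"                        # polls /var/tmp/spdk.sock until the target answers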
00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.835 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:27.095 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.095 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:27.095 19:22:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 malloc0 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
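The rpc_cmd calls above are a thin wrapper around SPDK's scripts/rpc.py, so the subsystem setup that precedes the fuzz run is roughly equivalent to driving rpc.py directly (a sketch under that assumption):

./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting transport ID string, trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user, is what the nvme_fuzz run below is pointed at.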
00:15:28.049 19:22:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:00.135 Fuzzing completed. Shutting down the fuzz application 00:16:00.135 00:16:00.135 Dumping successful admin opcodes: 00:16:00.135 8, 9, 10, 24, 00:16:00.135 Dumping successful io opcodes: 00:16:00.135 0, 00:16:00.136 NS: 0x20000081ef00 I/O qp, Total commands completed: 1038910, total successful commands: 4101, random_seed: 4219741632 00:16:00.136 NS: 0x20000081ef00 admin qp, Total commands completed: 255152, total successful commands: 2060, random_seed: 811806656 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2073170 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2073170 ']' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2073170 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2073170 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2073170' 00:16:00.136 killing process with pid 2073170 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2073170 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2073170 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:00.136 00:16:00.136 real 0m32.222s 00:16:00.136 user 0m30.459s 00:16:00.136 sys 0m30.890s 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:00.136 
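Teardown here goes through the same killprocess helper the compliance test used above; reconstructed from the two traces (the real function lives in test/common/autotest_common.sh, so treat this as a sketch), its shape is roughly:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # @950: refuse an empty pid
    kill -0 "$pid" || return 0                # @954: signal 0 only probes liveness
    if [ "$(uname)" = Linux ]; then           # @955
        process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0 here
    fi
    if [ "$process_name" = sudo ]; then       # @960: false in this run
        :                                     # sudo-wrapped case, not exercised in this log
    fi
    echo "killing process with pid $pid"      # @968
    kill "$pid"                               # @969
    wait "$pid"                               # @974: reap and propagate the exit status
}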
************************************ 00:16:00.136 END TEST nvmf_vfio_user_fuzz 00:16:00.136 ************************************ 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:00.136 ************************************ 00:16:00.136 START TEST nvmf_auth_target 00:16:00.136 ************************************ 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:00.136 * Looking for test storage... 00:16:00.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:00.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.136 --rc genhtml_branch_coverage=1 00:16:00.136 --rc genhtml_function_coverage=1 00:16:00.136 --rc genhtml_legend=1 00:16:00.136 --rc geninfo_all_blocks=1 00:16:00.136 --rc geninfo_unexecuted_blocks=1 00:16:00.136 00:16:00.136 ' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:00.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.136 --rc genhtml_branch_coverage=1 00:16:00.136 --rc genhtml_function_coverage=1 00:16:00.136 --rc genhtml_legend=1 00:16:00.136 --rc geninfo_all_blocks=1 00:16:00.136 --rc geninfo_unexecuted_blocks=1 00:16:00.136 00:16:00.136 ' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:00.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.136 --rc genhtml_branch_coverage=1 00:16:00.136 --rc genhtml_function_coverage=1 00:16:00.136 --rc genhtml_legend=1 00:16:00.136 --rc geninfo_all_blocks=1 00:16:00.136 --rc geninfo_unexecuted_blocks=1 00:16:00.136 00:16:00.136 ' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:00.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.136 --rc genhtml_branch_coverage=1 00:16:00.136 --rc genhtml_function_coverage=1 00:16:00.136 --rc genhtml_legend=1 00:16:00.136 --rc geninfo_all_blocks=1 00:16:00.136 --rc geninfo_unexecuted_blocks=1 00:16:00.136 00:16:00.136 ' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.136 19:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.136 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:00.137 19:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:05.463 
19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:05.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.463 19:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:05.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:05.463 Found net devices under 0000:86:00.0: cvl_0_0 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:05.463 Found net devices under 0000:86:00.1: cvl_0_1 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:05.463 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:05.463 19:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:05.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:05.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:16:05.464 00:16:05.464 --- 10.0.0.2 ping statistics --- 00:16:05.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.464 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:05.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:05.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:16:05.464 00:16:05.464 --- 10.0.0.1 ping statistics --- 00:16:05.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:05.464 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2081468 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2081468 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2081468 ']' 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
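The nvmf_tcp_init sequence traced above splits the back-to-back e810 port pair across network namespaces so target and initiator traffic really crosses the wire; flattened out, the topology is built like this (cvl_0_0/cvl_0_1 are the CI-renamed interface names):

ip netns add cvl_0_0_ns_spdk                    # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
ping -c 1 10.0.0.2                                  # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator

The two successful pings (0.408 ms and 0.123 ms round trips above) confirm the link before nvmf_tgt is started inside the namespace via ip netns exec.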
00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2081488 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2601e8eec861154c3761391e4046e43352cfe674ee4dd48c 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.ikM 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2601e8eec861154c3761391e4046e43352cfe674ee4dd48c 0 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2601e8eec861154c3761391e4046e43352cfe674ee4dd48c 0 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2601e8eec861154c3761391e4046e43352cfe674ee4dd48c 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 
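The gen_dhchap_key helper being traced here reads len/2 random bytes as a hex string with xxd and hands it to an inline python step that emits an NVMe DH-HMAC-CHAP secret. A minimal sketch of that formatting, assuming the TP 8006 layout in which the secret is "DHHC-1:<two-digit digest id>:<base64 of the key material plus its little-endian CRC-32>:" (the ASCII hex string itself serves as the key material, which is why the DHHC-1 secrets later in this log decode back to these hex values):

    # Sketch of the format_dhchap_key step; digest ids: 0=null 1=sha256 2=sha384 3=sha512
    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars, a "len 48" key as in the trace
    python3 -c 'import base64,binascii,struct,sys; k=sys.argv[1].encode(); crc=struct.pack("<I",binascii.crc32(k)); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" 0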
00:16:05.464 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.ikM 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.ikM 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.ikM 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=3476db2ac085788e7d5440004d99e038954b6140ae5eea7e813f92822416cf20 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.tRl 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 3476db2ac085788e7d5440004d99e038954b6140ae5eea7e813f92822416cf20 3 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 3476db2ac085788e7d5440004d99e038954b6140ae5eea7e813f92822416cf20 3 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=3476db2ac085788e7d5440004d99e038954b6140ae5eea7e813f92822416cf20 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.tRl 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.tRl 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.tRl 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 
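The /tmp/spdk.key-sha512.tRl file just recorded as ckeys[0] is the controller-side counterpart of keys[0]: wherever this log later connects with key0 it passes both secrets, making the DH-HMAC-CHAP exchange bidirectional. In nvme-cli terms, with the flags exactly as they appear further down in this log and the secrets abbreviated:

    # --dhchap-secret: the host proves itself to the controller.
    # --dhchap-ctrl-secret: the controller must prove itself back (bidirectional auth).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-secret      'DHHC-1:00:<host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'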
00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=49f1bf1cd158c25c046bcdf6b5d34781 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.vSD 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 49f1bf1cd158c25c046bcdf6b5d34781 1 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 49f1bf1cd158c25c046bcdf6b5d34781 1 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=49f1bf1cd158c25c046bcdf6b5d34781 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:05.464 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.vSD 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.vSD 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.vSD 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=cc0e6d612a1f646139c880a59ce6a17c5acb7246d62778c7 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Ora 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key cc0e6d612a1f646139c880a59ce6a17c5acb7246d62778c7 2 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 cc0e6d612a1f646139c880a59ce6a17c5acb7246d62778c7 2 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.465 19:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=cc0e6d612a1f646139c880a59ce6a17c5acb7246d62778c7 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Ora 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Ora 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Ora 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=582157ff83f7ec64bc78ed1aa0eef46f1eebd30461151316 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.xAC 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 582157ff83f7ec64bc78ed1aa0eef46f1eebd30461151316 2 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 582157ff83f7ec64bc78ed1aa0eef46f1eebd30461151316 2 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=582157ff83f7ec64bc78ed1aa0eef46f1eebd30461151316 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:16:05.465 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.xAC 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.xAC 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xAC 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 
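All eight secrets in this run come from the same helper with only two knobs: the digest name, which selects the two-digit id in the DHHC-1 prefix via the digests map each trace declares, and len, which counts hex characters, so xxd reads len/2 random bytes. A hedged restatement of that convention (gen_key_hex is a hypothetical wrapper, not a function in the harness):

    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)   # as declared above
    gen_key_hex() { xxd -p -c0 -l $(($1 / 2)) /dev/urandom; }        # len hex chars = len/2 bytes
    gen_key_hex 48    # 24-byte key: the null and sha384 entries in this run
    gen_key_hex 64    # 32-byte key: the sha512 entries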
00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=331317cb63076782be177f3655deb789 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.Kta 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 331317cb63076782be177f3655deb789 1 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 331317cb63076782be177f3655deb789 1 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=331317cb63076782be177f3655deb789 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.Kta 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.Kta 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Kta 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=1ba77f57d9cb0bec163b5b124aa27aa6e633d5fc0d141ab4b34aa76cc63de606 00:16:05.769 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.grL 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # 
format_dhchap_key 1ba77f57d9cb0bec163b5b124aa27aa6e633d5fc0d141ab4b34aa76cc63de606 3 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 1ba77f57d9cb0bec163b5b124aa27aa6e633d5fc0d141ab4b34aa76cc63de606 3 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=1ba77f57d9cb0bec163b5b124aa27aa6e633d5fc0d141ab4b34aa76cc63de606 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.grL 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.grL 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.grL 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2081468 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2081468 ']' 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.770 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.048 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.048 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:06.048 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2081488 /var/tmp/host.sock 00:16:06.048 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2081488 ']' 00:16:06.048 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:06.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
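At this point the harness holds its full key matrix, and every later iteration indexes into it. Collected from the traces above (paths are this run's mktemp results; ckeys[3] is deliberately empty so key3 exercises host-only authentication):

    keys[0]=/tmp/spdk.key-null.ikM      ckeys[0]=/tmp/spdk.key-sha512.tRl   # 48-char null key, bidirectional
    keys[1]=/tmp/spdk.key-sha256.vSD    ckeys[1]=/tmp/spdk.key-sha384.Ora   # 32-char sha256 key
    keys[2]=/tmp/spdk.key-sha384.xAC    ckeys[2]=/tmp/spdk.key-sha256.Kta   # 48-char sha384 key
    keys[3]=/tmp/spdk.key-sha512.grL    ckeys[3]=                           # 64-char sha512 key, no controller secret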
00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ikM 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ikM 00:16:06.049 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ikM 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.tRl ]] 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRl 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRl 00:16:06.308 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRl 00:16:06.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:06.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vSD 00:16:06.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.567 19:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vSD 00:16:06.567 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vSD 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Ora ]] 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ora 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ora 00:16:06.826 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ora 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xAC 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xAC 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xAC 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Kta ]] 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kta 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kta 00:16:07.085 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kta 00:16:07.343 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:07.344 19:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.grL 00:16:07.344 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.344 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.344 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.344 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.grL 00:16:07.344 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.grL 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.603 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.603 
19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.863 00:16:07.863 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.863 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.863 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.121 { 00:16:08.121 "cntlid": 1, 00:16:08.121 "qid": 0, 00:16:08.121 "state": "enabled", 00:16:08.121 "thread": "nvmf_tgt_poll_group_000", 00:16:08.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:08.121 "listen_address": { 00:16:08.121 "trtype": "TCP", 00:16:08.121 "adrfam": "IPv4", 00:16:08.121 "traddr": "10.0.0.2", 00:16:08.121 "trsvcid": "4420" 00:16:08.121 }, 00:16:08.121 "peer_address": { 00:16:08.121 "trtype": "TCP", 00:16:08.121 "adrfam": "IPv4", 00:16:08.121 "traddr": "10.0.0.1", 00:16:08.121 "trsvcid": "44856" 00:16:08.121 }, 00:16:08.121 "auth": { 00:16:08.121 "state": "completed", 00:16:08.121 "digest": "sha256", 00:16:08.121 "dhgroup": "null" 00:16:08.121 } 00:16:08.121 } 00:16:08.121 ]' 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.121 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.122 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.380 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.380 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.380 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.380 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.380 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.380 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:08.380 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:08.947 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.209 19:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.209 19:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.468 00:16:09.468 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.468 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.468 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.732 { 00:16:09.732 "cntlid": 3, 00:16:09.732 "qid": 0, 00:16:09.732 "state": "enabled", 00:16:09.732 "thread": "nvmf_tgt_poll_group_000", 00:16:09.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:09.732 "listen_address": { 00:16:09.732 "trtype": "TCP", 00:16:09.732 "adrfam": "IPv4", 00:16:09.732 "traddr": "10.0.0.2", 00:16:09.732 "trsvcid": "4420" 00:16:09.732 }, 00:16:09.732 "peer_address": { 00:16:09.732 "trtype": "TCP", 00:16:09.732 "adrfam": "IPv4", 00:16:09.732 "traddr": "10.0.0.1", 00:16:09.732 "trsvcid": "44880" 00:16:09.732 }, 00:16:09.732 "auth": { 00:16:09.732 "state": "completed", 00:16:09.732 "digest": "sha256", 00:16:09.732 "dhgroup": "null" 00:16:09.732 } 00:16:09.732 } 00:16:09.732 ]' 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.732 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.990 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:09.990 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.558 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.816 19:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.816 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.075 00:16:11.075 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.075 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.075 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.333 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.333 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.333 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.333 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.333 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.333 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.334 { 00:16:11.334 "cntlid": 5, 00:16:11.334 "qid": 0, 00:16:11.334 "state": "enabled", 00:16:11.334 "thread": "nvmf_tgt_poll_group_000", 00:16:11.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:11.334 "listen_address": { 00:16:11.334 "trtype": "TCP", 00:16:11.334 "adrfam": "IPv4", 00:16:11.334 "traddr": "10.0.0.2", 00:16:11.334 "trsvcid": "4420" 00:16:11.334 }, 00:16:11.334 "peer_address": { 00:16:11.334 "trtype": "TCP", 00:16:11.334 "adrfam": "IPv4", 00:16:11.334 "traddr": "10.0.0.1", 00:16:11.334 "trsvcid": "41680" 00:16:11.334 }, 00:16:11.334 "auth": { 00:16:11.334 "state": "completed", 00:16:11.334 "digest": "sha256", 00:16:11.334 "dhgroup": "null" 00:16:11.334 } 00:16:11.334 } 00:16:11.334 ]' 00:16:11.334 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.334 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.334 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.334 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:11.334 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.334 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.334 19:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.334 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.592 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:11.592 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.160 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.419 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:12.678 00:16:12.678 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.678 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.678 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.936 { 00:16:12.936 "cntlid": 7, 00:16:12.936 "qid": 0, 00:16:12.936 "state": "enabled", 00:16:12.936 "thread": "nvmf_tgt_poll_group_000", 00:16:12.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:12.936 "listen_address": { 00:16:12.936 "trtype": "TCP", 00:16:12.936 "adrfam": "IPv4", 00:16:12.936 "traddr": "10.0.0.2", 00:16:12.936 "trsvcid": "4420" 00:16:12.936 }, 00:16:12.936 "peer_address": { 00:16:12.936 "trtype": "TCP", 00:16:12.936 "adrfam": "IPv4", 00:16:12.936 "traddr": "10.0.0.1", 00:16:12.936 "trsvcid": "41706" 00:16:12.936 }, 00:16:12.936 "auth": { 00:16:12.936 "state": "completed", 00:16:12.936 "digest": "sha256", 00:16:12.936 "dhgroup": "null" 00:16:12.936 } 00:16:12.936 } 00:16:12.936 ]' 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.936 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.937 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.937 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:12.937 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.937 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.937 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.937 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.195 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:13.195 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.763 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.021 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.280 00:16:14.280 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.280 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.280 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.280 { 00:16:14.280 "cntlid": 9, 00:16:14.280 "qid": 0, 00:16:14.280 "state": "enabled", 00:16:14.280 "thread": "nvmf_tgt_poll_group_000", 00:16:14.280 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:14.280 "listen_address": { 00:16:14.280 "trtype": "TCP", 00:16:14.280 "adrfam": "IPv4", 00:16:14.280 "traddr": "10.0.0.2", 00:16:14.280 "trsvcid": "4420" 00:16:14.280 }, 00:16:14.280 "peer_address": { 00:16:14.280 "trtype": "TCP", 00:16:14.280 "adrfam": "IPv4", 00:16:14.280 "traddr": "10.0.0.1", 00:16:14.280 "trsvcid": "41732" 00:16:14.280 }, 00:16:14.280 "auth": { 00:16:14.280 "state": "completed", 00:16:14.280 "digest": "sha256", 00:16:14.280 "dhgroup": "ffdhe2048" 00:16:14.280 } 00:16:14.280 } 00:16:14.280 ]' 00:16:14.280 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.538 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.797 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:14.797 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.365 19:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.365 19:23:39 
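The trace above repeats the same shape for every digest/DH-group/key combination. Condensed from the xtrace lines, one connect_authenticate pass amounts to roughly the following sequence; the socket paths, NQNs, and key names (key1/ckey1) are copied from this log, rpc_cmd stands for the target-side rpc.py wrapper, and this is a sketch of what the trace shows rather than the script itself:

    # Host side: pin DH-HMAC-CHAP negotiation to one digest and one DH group.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Target side: allow the host NQN and bind its key (and controller key).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Host side: attach a controller; DH-HMAC-CHAP runs during this connect.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

The qpair is then inspected (see the next sketch), the controller detached, the same key exercised once more through the kernel initiator with nvme connect, and the host removed again with nvmf_subsystem_remove_host.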
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.365 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.623 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.623 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.623 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.623 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.623 00:16:15.881 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.882 { 00:16:15.882 "cntlid": 11, 00:16:15.882 "qid": 0, 00:16:15.882 "state": "enabled", 00:16:15.882 "thread": "nvmf_tgt_poll_group_000", 00:16:15.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.882 "listen_address": { 00:16:15.882 "trtype": "TCP", 00:16:15.882 "adrfam": "IPv4", 00:16:15.882 "traddr": "10.0.0.2", 00:16:15.882 "trsvcid": "4420" 00:16:15.882 }, 00:16:15.882 "peer_address": { 00:16:15.882 "trtype": "TCP", 00:16:15.882 "adrfam": "IPv4", 00:16:15.882 "traddr": "10.0.0.1", 00:16:15.882 "trsvcid": "41752" 00:16:15.882 }, 00:16:15.882 "auth": { 00:16:15.882 "state": "completed", 00:16:15.882 "digest": "sha256", 00:16:15.882 "dhgroup": "ffdhe2048" 00:16:15.882 } 00:16:15.882 } 00:16:15.882 ]' 00:16:15.882 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.141 19:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.142 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.142 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:16.142 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.142 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.142 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.142 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.400 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:16.400 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.969 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.969 19:23:40 
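After each attach, the script asserts what was actually negotiated by dumping the target's qpairs and filtering with jq; the filters below are copied verbatim from the trace, and the expected values are the ones for the sha256/ffdhe2048 iterations shown here:

    # rpc_cmd talks to the target app; only the target knows its view of the qpair.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

    # The .auth object carries the negotiated digest, DH group, and final state.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" with the expected dhgroup is the real pass/fail signal for each iteration; the rest of the dump (cntlid, addresses, ports) is informational.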
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.969 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.228 00:16:17.228 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.228 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.228 19:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.486 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.486 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.486 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.486 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.486 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.487 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.487 { 00:16:17.487 "cntlid": 13, 00:16:17.487 "qid": 0, 00:16:17.487 "state": "enabled", 00:16:17.487 "thread": "nvmf_tgt_poll_group_000", 00:16:17.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:17.487 "listen_address": { 00:16:17.487 "trtype": "TCP", 00:16:17.487 "adrfam": "IPv4", 00:16:17.487 "traddr": "10.0.0.2", 00:16:17.487 "trsvcid": "4420" 00:16:17.487 }, 00:16:17.487 "peer_address": { 00:16:17.487 "trtype": "TCP", 00:16:17.487 "adrfam": "IPv4", 00:16:17.487 "traddr": "10.0.0.1", 00:16:17.487 "trsvcid": "41770" 00:16:17.487 }, 00:16:17.487 "auth": { 00:16:17.487 "state": "completed", 00:16:17.487 "digest": 
"sha256", 00:16:17.487 "dhgroup": "ffdhe2048" 00:16:17.487 } 00:16:17.487 } 00:16:17.487 ]' 00:16:17.487 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.487 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.487 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.487 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.487 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.745 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.745 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.745 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.745 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:17.745 19:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.313 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.573 19:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.573 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.832 00:16:18.832 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.832 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.832 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.091 { 00:16:19.091 "cntlid": 15, 00:16:19.091 "qid": 0, 00:16:19.091 "state": "enabled", 00:16:19.091 "thread": "nvmf_tgt_poll_group_000", 00:16:19.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:19.091 "listen_address": { 00:16:19.091 "trtype": "TCP", 00:16:19.091 "adrfam": "IPv4", 00:16:19.091 "traddr": "10.0.0.2", 00:16:19.091 "trsvcid": "4420" 00:16:19.091 }, 00:16:19.091 "peer_address": { 00:16:19.091 "trtype": "TCP", 00:16:19.091 "adrfam": "IPv4", 00:16:19.091 "traddr": "10.0.0.1", 00:16:19.091 
"trsvcid": "41798" 00:16:19.091 }, 00:16:19.091 "auth": { 00:16:19.091 "state": "completed", 00:16:19.091 "digest": "sha256", 00:16:19.091 "dhgroup": "ffdhe2048" 00:16:19.091 } 00:16:19.091 } 00:16:19.091 ]' 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.091 19:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.350 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:19.350 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.918 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:20.177 19:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.177 19:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.436 00:16:20.436 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.436 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.436 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.696 { 00:16:20.696 "cntlid": 17, 00:16:20.696 "qid": 0, 00:16:20.696 "state": "enabled", 00:16:20.696 "thread": "nvmf_tgt_poll_group_000", 00:16:20.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:20.696 "listen_address": { 00:16:20.696 "trtype": "TCP", 00:16:20.696 "adrfam": "IPv4", 
00:16:20.696 "traddr": "10.0.0.2", 00:16:20.696 "trsvcid": "4420" 00:16:20.696 }, 00:16:20.696 "peer_address": { 00:16:20.696 "trtype": "TCP", 00:16:20.696 "adrfam": "IPv4", 00:16:20.696 "traddr": "10.0.0.1", 00:16:20.696 "trsvcid": "36102" 00:16:20.696 }, 00:16:20.696 "auth": { 00:16:20.696 "state": "completed", 00:16:20.696 "digest": "sha256", 00:16:20.696 "dhgroup": "ffdhe3072" 00:16:20.696 } 00:16:20.696 } 00:16:20.696 ]' 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.696 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.954 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:20.955 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.523 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.782 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.041 00:16:22.041 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.041 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.041 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.300 { 
00:16:22.300 "cntlid": 19, 00:16:22.300 "qid": 0, 00:16:22.300 "state": "enabled", 00:16:22.300 "thread": "nvmf_tgt_poll_group_000", 00:16:22.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:22.300 "listen_address": { 00:16:22.300 "trtype": "TCP", 00:16:22.300 "adrfam": "IPv4", 00:16:22.300 "traddr": "10.0.0.2", 00:16:22.300 "trsvcid": "4420" 00:16:22.300 }, 00:16:22.300 "peer_address": { 00:16:22.300 "trtype": "TCP", 00:16:22.300 "adrfam": "IPv4", 00:16:22.300 "traddr": "10.0.0.1", 00:16:22.300 "trsvcid": "36134" 00:16:22.300 }, 00:16:22.300 "auth": { 00:16:22.300 "state": "completed", 00:16:22.300 "digest": "sha256", 00:16:22.300 "dhgroup": "ffdhe3072" 00:16:22.300 } 00:16:22.300 } 00:16:22.300 ]' 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.300 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.559 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:22.559 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.127 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.386 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.387 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.387 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.644 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.644 19:23:47 
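Stepping back, the auth.sh@119-123 markers scattered through the trace reveal the overall sweep: an outer loop over DH groups (null earlier, then ffdhe2048, the ffdhe3072 passes running here, with ffdhe4096 following) and an inner loop over the four key ids, re-issuing bdev_nvme_set_options before each connect_authenticate. Reconstructed schematically from those trace lines, with the digest pinned to sha256 throughout this stretch of the log:

    for dhgroup in "${dhgroups[@]}"; do       # target/auth.sh@119
        for keyid in "${!keys[@]}"; do        # target/auth.sh@120
            # auth.sh@121: renegotiate the host-side options for this group
            hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # auth.sh@123: full add_host / attach / verify / detach cycle
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done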
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.644 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.644 { 00:16:23.644 "cntlid": 21, 00:16:23.644 "qid": 0, 00:16:23.644 "state": "enabled", 00:16:23.644 "thread": "nvmf_tgt_poll_group_000", 00:16:23.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.644 "listen_address": { 00:16:23.644 "trtype": "TCP", 00:16:23.644 "adrfam": "IPv4", 00:16:23.644 "traddr": "10.0.0.2", 00:16:23.644 "trsvcid": "4420" 00:16:23.644 }, 00:16:23.644 "peer_address": { 00:16:23.644 "trtype": "TCP", 00:16:23.644 "adrfam": "IPv4", 00:16:23.644 "traddr": "10.0.0.1", 00:16:23.644 "trsvcid": "36152" 00:16:23.644 }, 00:16:23.644 "auth": { 00:16:23.644 "state": "completed", 00:16:23.644 "digest": "sha256", 00:16:23.644 "dhgroup": "ffdhe3072" 00:16:23.644 } 00:16:23.644 } 00:16:23.644 ]' 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.901 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.160 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:24.160 19:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.729 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.988 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.248 00:16:25.248 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.248 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.248 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.248 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.248 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.248 19:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.248 19:23:48 
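One reading aid for the wall of RPC lines: hostrpc and rpc_cmd are two different control planes. Every target/auth.sh@31 expansion shows hostrpc resolving to rpc.py against /var/tmp/host.sock, the SPDK app acting as the NVMe-oF host, while rpc_cmd drives the target that owns nqn.2024-03.io.spdk:cnode0. The wrapper is equivalent to the following sketch; the function name appears in the trace, but its exact body is inferred from the @31 expansions:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Host-side RPCs go to the host app's own UNIX socket, not the target's.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }

    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # prints nvme0, as asserted above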
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.248 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.248 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.248 { 00:16:25.248 "cntlid": 23, 00:16:25.248 "qid": 0, 00:16:25.248 "state": "enabled", 00:16:25.248 "thread": "nvmf_tgt_poll_group_000", 00:16:25.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:25.248 "listen_address": { 00:16:25.248 "trtype": "TCP", 00:16:25.248 "adrfam": "IPv4", 00:16:25.248 "traddr": "10.0.0.2", 00:16:25.248 "trsvcid": "4420" 00:16:25.248 }, 00:16:25.248 "peer_address": { 00:16:25.248 "trtype": "TCP", 00:16:25.248 "adrfam": "IPv4", 00:16:25.248 "traddr": "10.0.0.1", 00:16:25.248 "trsvcid": "36174" 00:16:25.248 }, 00:16:25.248 "auth": { 00:16:25.248 "state": "completed", 00:16:25.248 "digest": "sha256", 00:16:25.248 "dhgroup": "ffdhe3072" 00:16:25.248 } 00:16:25.248 } 00:16:25.248 ]' 00:16:25.248 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.507 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.765 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:25.765 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.333 19:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.333 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.592 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.851 { 00:16:26.851 "cntlid": 25, 00:16:26.851 "qid": 0, 00:16:26.851 "state": "enabled", 00:16:26.851 "thread": "nvmf_tgt_poll_group_000", 00:16:26.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:26.851 "listen_address": { 00:16:26.851 "trtype": "TCP", 00:16:26.851 "adrfam": "IPv4", 00:16:26.851 "traddr": "10.0.0.2", 00:16:26.851 "trsvcid": "4420" 00:16:26.851 }, 00:16:26.851 "peer_address": { 00:16:26.851 "trtype": "TCP", 00:16:26.851 "adrfam": "IPv4", 00:16:26.851 "traddr": "10.0.0.1", 00:16:26.851 "trsvcid": "36196" 00:16:26.851 }, 00:16:26.851 "auth": { 00:16:26.851 "state": "completed", 00:16:26.851 "digest": "sha256", 00:16:26.851 "dhgroup": "ffdhe4096" 00:16:26.851 } 00:16:26.851 } 00:16:26.851 ]' 00:16:26.851 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.110 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.369 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:27.369 19:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.938 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.197 00:16:28.457 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.457 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.457 19:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.457 { 00:16:28.457 "cntlid": 27, 00:16:28.457 "qid": 0, 00:16:28.457 "state": "enabled", 00:16:28.457 "thread": "nvmf_tgt_poll_group_000", 00:16:28.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.457 "listen_address": { 00:16:28.457 "trtype": "TCP", 00:16:28.457 "adrfam": "IPv4", 00:16:28.457 "traddr": "10.0.0.2", 00:16:28.457 "trsvcid": "4420" 00:16:28.457 }, 00:16:28.457 "peer_address": { 00:16:28.457 "trtype": "TCP", 00:16:28.457 "adrfam": "IPv4", 00:16:28.457 "traddr": "10.0.0.1", 00:16:28.457 "trsvcid": "36216" 00:16:28.457 }, 00:16:28.457 "auth": { 00:16:28.457 "state": "completed", 00:16:28.457 "digest": "sha256", 00:16:28.457 "dhgroup": "ffdhe4096" 00:16:28.457 } 00:16:28.457 } 00:16:28.457 ]' 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.457 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.716 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.716 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.716 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.716 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.716 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.975 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:28.975 19:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:29.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.543 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.544 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.803 00:16:29.803 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
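[Reading aid, reconstructed from the xtrace above; not part of the captured log. One connect_authenticate iteration reduces to roughly the following sketch. hostrpc and rpc_cmd are the suite's own wrappers around scripts/rpc.py (host socket /var/tmp/host.sock versus the target socket), and the loop variables here are illustrative names, not the script's actual locals.]

```bash
# One digest/dhgroup/key iteration as it appears in the xtrace above.
digest=sha256
dhgroup=ffdhe4096
keyid=2
hostnqn="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"
subnqn="nqn.2024-03.io.spdk:cnode0"

# 1) Pin the host-side initiator to a single digest/dhgroup combination.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2) Register the host on the target with the keys under test. key3 carries
#    no controller key, hence the ${ckeys[$3]:+...} expansion in the script.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 3) Attach a controller via the host RPC socket; this runs DH-HMAC-CHAP.
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# 4) Assert the negotiated auth parameters on the target side.
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

# 5) Detach before the kernel-initiator leg of the same iteration.
hostrpc bdev_nvme_detach_controller nvme0
```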
00:16:29.803 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.803 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.062 { 00:16:30.062 "cntlid": 29, 00:16:30.062 "qid": 0, 00:16:30.062 "state": "enabled", 00:16:30.062 "thread": "nvmf_tgt_poll_group_000", 00:16:30.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:30.062 "listen_address": { 00:16:30.062 "trtype": "TCP", 00:16:30.062 "adrfam": "IPv4", 00:16:30.062 "traddr": "10.0.0.2", 00:16:30.062 "trsvcid": "4420" 00:16:30.062 }, 00:16:30.062 "peer_address": { 00:16:30.062 "trtype": "TCP", 00:16:30.062 "adrfam": "IPv4", 00:16:30.062 "traddr": "10.0.0.1", 00:16:30.062 "trsvcid": "36234" 00:16:30.062 }, 00:16:30.062 "auth": { 00:16:30.062 "state": "completed", 00:16:30.062 "digest": "sha256", 00:16:30.062 "dhgroup": "ffdhe4096" 00:16:30.062 } 00:16:30.062 } 00:16:30.062 ]' 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.062 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.321 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.321 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.321 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.321 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.321 19:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.321 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:30.321 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: 
--dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:30.888 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.146 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.147 19:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.405 00:16:31.405 19:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.405 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.405 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.663 { 00:16:31.663 "cntlid": 31, 00:16:31.663 "qid": 0, 00:16:31.663 "state": "enabled", 00:16:31.663 "thread": "nvmf_tgt_poll_group_000", 00:16:31.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.663 "listen_address": { 00:16:31.663 "trtype": "TCP", 00:16:31.663 "adrfam": "IPv4", 00:16:31.663 "traddr": "10.0.0.2", 00:16:31.663 "trsvcid": "4420" 00:16:31.663 }, 00:16:31.663 "peer_address": { 00:16:31.663 "trtype": "TCP", 00:16:31.663 "adrfam": "IPv4", 00:16:31.663 "traddr": "10.0.0.1", 00:16:31.663 "trsvcid": "39150" 00:16:31.663 }, 00:16:31.663 "auth": { 00:16:31.663 "state": "completed", 00:16:31.663 "digest": "sha256", 00:16:31.663 "dhgroup": "ffdhe4096" 00:16:31.663 } 00:16:31.663 } 00:16:31.663 ]' 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.663 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:31.922 19:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.488 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.747 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.006 00:16:33.265 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.265 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.265 19:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.265 { 00:16:33.265 "cntlid": 33, 00:16:33.265 "qid": 0, 00:16:33.265 "state": "enabled", 00:16:33.265 "thread": "nvmf_tgt_poll_group_000", 00:16:33.265 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.265 "listen_address": { 00:16:33.265 "trtype": "TCP", 00:16:33.265 "adrfam": "IPv4", 00:16:33.265 "traddr": "10.0.0.2", 00:16:33.265 "trsvcid": "4420" 00:16:33.265 }, 00:16:33.265 "peer_address": { 00:16:33.265 "trtype": "TCP", 00:16:33.265 "adrfam": "IPv4", 00:16:33.265 "traddr": "10.0.0.1", 00:16:33.265 "trsvcid": "39170" 00:16:33.265 }, 00:16:33.265 "auth": { 00:16:33.265 "state": "completed", 00:16:33.265 "digest": "sha256", 00:16:33.265 "dhgroup": "ffdhe6144" 00:16:33.265 } 00:16:33.265 } 00:16:33.265 ]' 00:16:33.265 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.524 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.782 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret 
DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:33.782 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:34.349 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.350 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.609 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.869 00:16:34.869 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.869 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.869 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.128 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.129 { 00:16:35.129 "cntlid": 35, 00:16:35.129 "qid": 0, 00:16:35.129 "state": "enabled", 00:16:35.129 "thread": "nvmf_tgt_poll_group_000", 00:16:35.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:35.129 "listen_address": { 00:16:35.129 "trtype": "TCP", 00:16:35.129 "adrfam": "IPv4", 00:16:35.129 "traddr": "10.0.0.2", 00:16:35.129 "trsvcid": "4420" 00:16:35.129 }, 00:16:35.129 "peer_address": { 00:16:35.129 "trtype": "TCP", 00:16:35.129 "adrfam": "IPv4", 00:16:35.129 "traddr": "10.0.0.1", 00:16:35.129 "trsvcid": "39198" 00:16:35.129 }, 00:16:35.129 "auth": { 00:16:35.129 "state": "completed", 00:16:35.129 "digest": "sha256", 00:16:35.129 "dhgroup": "ffdhe6144" 00:16:35.129 } 00:16:35.129 } 00:16:35.129 ]' 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.129 19:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.388 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:35.388 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:35.955 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.214 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.215 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.215 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.473 00:16:36.473 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.473 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.473 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.732 { 00:16:36.732 "cntlid": 37, 00:16:36.732 "qid": 0, 00:16:36.732 "state": "enabled", 00:16:36.732 "thread": "nvmf_tgt_poll_group_000", 00:16:36.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.732 "listen_address": { 00:16:36.732 "trtype": "TCP", 00:16:36.732 "adrfam": "IPv4", 00:16:36.732 "traddr": "10.0.0.2", 00:16:36.732 "trsvcid": "4420" 00:16:36.732 }, 00:16:36.732 "peer_address": { 00:16:36.732 "trtype": "TCP", 00:16:36.732 "adrfam": "IPv4", 00:16:36.732 "traddr": "10.0.0.1", 00:16:36.732 "trsvcid": "39226" 00:16:36.732 }, 00:16:36.732 "auth": { 00:16:36.732 "state": "completed", 00:16:36.732 "digest": "sha256", 00:16:36.732 "dhgroup": "ffdhe6144" 00:16:36.732 } 00:16:36.732 } 00:16:36.732 ]' 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.732 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:36.991 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.991 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:36.991 19:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.559 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.818 19:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.818 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.076 00:16:38.076 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.076 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.076 19:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.335 { 00:16:38.335 "cntlid": 39, 00:16:38.335 "qid": 0, 00:16:38.335 "state": "enabled", 00:16:38.335 "thread": "nvmf_tgt_poll_group_000", 00:16:38.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.335 "listen_address": { 00:16:38.335 "trtype": "TCP", 00:16:38.335 "adrfam": "IPv4", 00:16:38.335 "traddr": "10.0.0.2", 00:16:38.335 "trsvcid": "4420" 00:16:38.335 }, 00:16:38.335 "peer_address": { 00:16:38.335 "trtype": "TCP", 00:16:38.335 "adrfam": "IPv4", 00:16:38.335 "traddr": "10.0.0.1", 00:16:38.335 "trsvcid": "39262" 00:16:38.335 }, 00:16:38.335 "auth": { 00:16:38.335 "state": "completed", 00:16:38.335 "digest": "sha256", 00:16:38.335 "dhgroup": "ffdhe6144" 00:16:38.335 } 00:16:38.335 } 00:16:38.335 ]' 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.335 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.594 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.594 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.594 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:38.594 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.594 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.852 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:38.852 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:39.418 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.418 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:39.418 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.418 19:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
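[Reading aid, reconstructed from the xtrace above; not part of the captured log. Between the detach and the nvmf_subsystem_remove_host entries, each iteration also replays the same keys through the kernel initiator via nvme-cli. The flags below are copied from the log; the DHHC-1 secret values are abbreviated placeholders for the full base64 strings visible above (the trailing colon is part of the secret format).]

```bash
# Kernel-initiator leg of an iteration (key0 shown; secrets abbreviated).
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
     --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
     --dhchap-secret 'DHHC-1:00:MjYwMWU4...fIb/Qg==:' \
     --dhchap-ctrl-secret 'DHHC-1:03:MzQ3NmRi...MOM8zjU=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)

# Remove the host mapping so the next dhgroup/key pair starts clean.
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
```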
00:16:39.418 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.678 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.678 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.678 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.678 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.936 00:16:39.936 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.936 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.936 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.196 { 00:16:40.196 "cntlid": 41, 00:16:40.196 "qid": 0, 00:16:40.196 "state": "enabled", 00:16:40.196 "thread": "nvmf_tgt_poll_group_000", 00:16:40.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:40.196 "listen_address": { 00:16:40.196 "trtype": "TCP", 00:16:40.196 "adrfam": "IPv4", 00:16:40.196 "traddr": "10.0.0.2", 00:16:40.196 "trsvcid": "4420" 00:16:40.196 }, 00:16:40.196 "peer_address": { 00:16:40.196 "trtype": "TCP", 00:16:40.196 "adrfam": "IPv4", 00:16:40.196 "traddr": "10.0.0.1", 00:16:40.196 "trsvcid": "39294" 00:16:40.196 }, 00:16:40.196 "auth": { 00:16:40.196 "state": "completed", 00:16:40.196 "digest": "sha256", 00:16:40.196 "dhgroup": "ffdhe8192" 00:16:40.196 } 00:16:40.196 } 00:16:40.196 ]' 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.196 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.455 19:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.455 19:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.455 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.455 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.455 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.714 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:40.714 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:41.281 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.282 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.282 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.849 00:16:41.849 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.849 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.849 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.108 { 00:16:42.108 "cntlid": 43, 00:16:42.108 "qid": 0, 00:16:42.108 "state": "enabled", 00:16:42.108 "thread": "nvmf_tgt_poll_group_000", 00:16:42.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.108 "listen_address": { 00:16:42.108 "trtype": "TCP", 00:16:42.108 "adrfam": "IPv4", 00:16:42.108 "traddr": "10.0.0.2", 00:16:42.108 "trsvcid": "4420" 00:16:42.108 }, 00:16:42.108 "peer_address": { 00:16:42.108 "trtype": "TCP", 00:16:42.108 "adrfam": "IPv4", 00:16:42.108 "traddr": "10.0.0.1", 00:16:42.108 "trsvcid": "35948" 00:16:42.108 }, 00:16:42.108 "auth": { 00:16:42.108 "state": "completed", 00:16:42.108 "digest": "sha256", 00:16:42.108 "dhgroup": "ffdhe8192" 00:16:42.108 } 00:16:42.108 } 00:16:42.108 ]' 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.108 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.366 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:42.366 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.031 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.031 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.369 19:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.369 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.634 00:16:43.634 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.634 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.634 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.895 { 00:16:43.895 "cntlid": 45, 00:16:43.895 "qid": 0, 00:16:43.895 "state": "enabled", 00:16:43.895 "thread": "nvmf_tgt_poll_group_000", 00:16:43.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:43.895 "listen_address": { 00:16:43.895 "trtype": "TCP", 00:16:43.895 "adrfam": "IPv4", 00:16:43.895 "traddr": "10.0.0.2", 00:16:43.895 "trsvcid": "4420" 00:16:43.895 }, 00:16:43.895 "peer_address": { 00:16:43.895 "trtype": "TCP", 00:16:43.895 "adrfam": "IPv4", 00:16:43.895 "traddr": "10.0.0.1", 00:16:43.895 "trsvcid": "35986" 00:16:43.895 }, 00:16:43.895 "auth": { 00:16:43.895 "state": "completed", 00:16:43.895 "digest": "sha256", 00:16:43.895 "dhgroup": "ffdhe8192" 00:16:43.895 } 00:16:43.895 } 00:16:43.895 ]' 00:16:43.895 
19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.895 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.154 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:44.154 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:44.721 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.721 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:44.721 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.722 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.722 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.722 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.722 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.722 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:44.981 19:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:44.981 19:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.549 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.549 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.808 { 00:16:45.808 "cntlid": 47, 00:16:45.808 "qid": 0, 00:16:45.808 "state": "enabled", 00:16:45.808 "thread": "nvmf_tgt_poll_group_000", 00:16:45.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.808 "listen_address": { 00:16:45.808 "trtype": "TCP", 00:16:45.808 "adrfam": "IPv4", 00:16:45.808 "traddr": "10.0.0.2", 00:16:45.808 "trsvcid": "4420" 00:16:45.808 }, 00:16:45.808 "peer_address": { 00:16:45.808 "trtype": "TCP", 00:16:45.808 "adrfam": "IPv4", 00:16:45.808 "traddr": "10.0.0.1", 00:16:45.808 "trsvcid": "36018" 00:16:45.808 }, 00:16:45.808 "auth": { 00:16:45.808 "state": "completed", 00:16:45.808 
"digest": "sha256", 00:16:45.808 "dhgroup": "ffdhe8192" 00:16:45.808 } 00:16:45.808 } 00:16:45.808 ]' 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.808 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.067 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:46.067 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:46.635 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.635 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.636 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:46.895 19:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.895 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.895 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.154 { 00:16:47.154 "cntlid": 49, 00:16:47.154 "qid": 0, 00:16:47.154 "state": "enabled", 00:16:47.154 "thread": "nvmf_tgt_poll_group_000", 00:16:47.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.154 "listen_address": { 00:16:47.154 "trtype": "TCP", 00:16:47.154 "adrfam": "IPv4", 
00:16:47.154 "traddr": "10.0.0.2", 00:16:47.154 "trsvcid": "4420" 00:16:47.154 }, 00:16:47.154 "peer_address": { 00:16:47.154 "trtype": "TCP", 00:16:47.154 "adrfam": "IPv4", 00:16:47.154 "traddr": "10.0.0.1", 00:16:47.154 "trsvcid": "36054" 00:16:47.154 }, 00:16:47.154 "auth": { 00:16:47.154 "state": "completed", 00:16:47.154 "digest": "sha384", 00:16:47.154 "dhgroup": "null" 00:16:47.154 } 00:16:47.154 } 00:16:47.154 ]' 00:16:47.154 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.413 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.413 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.413 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:47.413 19:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.413 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.413 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.413 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.673 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:47.673 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:48.241 19:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.241 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.500 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.500 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.500 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.500 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.501 00:16:48.501 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.501 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.501 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.760 { 00:16:48.760 "cntlid": 51, 00:16:48.760 "qid": 0, 00:16:48.760 "state": "enabled", 
00:16:48.760 "thread": "nvmf_tgt_poll_group_000", 00:16:48.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:48.760 "listen_address": { 00:16:48.760 "trtype": "TCP", 00:16:48.760 "adrfam": "IPv4", 00:16:48.760 "traddr": "10.0.0.2", 00:16:48.760 "trsvcid": "4420" 00:16:48.760 }, 00:16:48.760 "peer_address": { 00:16:48.760 "trtype": "TCP", 00:16:48.760 "adrfam": "IPv4", 00:16:48.760 "traddr": "10.0.0.1", 00:16:48.760 "trsvcid": "36080" 00:16:48.760 }, 00:16:48.760 "auth": { 00:16:48.760 "state": "completed", 00:16:48.760 "digest": "sha384", 00:16:48.760 "dhgroup": "null" 00:16:48.760 } 00:16:48.760 } 00:16:48.760 ]' 00:16:48.760 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.019 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.278 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:49.278 19:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:49.845 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.845 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:49.845 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.845 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.846 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.104 00:16:50.104 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.104 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.104 19:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.363 19:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.363 { 00:16:50.363 "cntlid": 53, 00:16:50.363 "qid": 0, 00:16:50.363 "state": "enabled", 00:16:50.363 "thread": "nvmf_tgt_poll_group_000", 00:16:50.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.363 "listen_address": { 00:16:50.363 "trtype": "TCP", 00:16:50.363 "adrfam": "IPv4", 00:16:50.363 "traddr": "10.0.0.2", 00:16:50.363 "trsvcid": "4420" 00:16:50.363 }, 00:16:50.363 "peer_address": { 00:16:50.363 "trtype": "TCP", 00:16:50.363 "adrfam": "IPv4", 00:16:50.363 "traddr": "10.0.0.1", 00:16:50.363 "trsvcid": "36954" 00:16:50.363 }, 00:16:50.363 "auth": { 00:16:50.363 "state": "completed", 00:16:50.363 "digest": "sha384", 00:16:50.363 "dhgroup": "null" 00:16:50.363 } 00:16:50.363 } 00:16:50.363 ]' 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.363 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:50.622 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.189 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.448 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.707 00:16:51.707 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.707 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.707 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.967 { 00:16:51.967 "cntlid": 55, 00:16:51.967 "qid": 0, 00:16:51.967 "state": "enabled", 00:16:51.967 "thread": "nvmf_tgt_poll_group_000", 00:16:51.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:51.967 "listen_address": { 00:16:51.967 "trtype": "TCP", 00:16:51.967 "adrfam": "IPv4", 00:16:51.967 "traddr": "10.0.0.2", 00:16:51.967 "trsvcid": "4420" 00:16:51.967 }, 00:16:51.967 "peer_address": { 00:16:51.967 "trtype": "TCP", 00:16:51.967 "adrfam": "IPv4", 00:16:51.967 "traddr": "10.0.0.1", 00:16:51.967 "trsvcid": "36986" 00:16:51.967 }, 00:16:51.967 "auth": { 00:16:51.967 "state": "completed", 00:16:51.967 "digest": "sha384", 00:16:51.967 "dhgroup": "null" 00:16:51.967 } 00:16:51.967 } 00:16:51.967 ]' 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.967 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.226 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:52.226 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.795 19:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.795 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.054 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.313 00:16:53.313 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.313 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.313 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.573 { 00:16:53.573 "cntlid": 57, 00:16:53.573 "qid": 0, 00:16:53.573 "state": "enabled", 00:16:53.573 "thread": "nvmf_tgt_poll_group_000", 00:16:53.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:53.573 "listen_address": { 00:16:53.573 "trtype": "TCP", 00:16:53.573 "adrfam": "IPv4", 00:16:53.573 "traddr": "10.0.0.2", 00:16:53.573 "trsvcid": "4420" 00:16:53.573 }, 00:16:53.573 "peer_address": { 00:16:53.573 "trtype": "TCP", 00:16:53.573 "adrfam": "IPv4", 00:16:53.573 "traddr": "10.0.0.1", 00:16:53.573 "trsvcid": "37004" 00:16:53.573 }, 00:16:53.573 "auth": { 00:16:53.573 "state": "completed", 00:16:53.573 "digest": "sha384", 00:16:53.573 "dhgroup": "ffdhe2048" 00:16:53.573 } 00:16:53.573 } 00:16:53.573 ]' 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.573 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.831 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:53.831 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.399 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.658 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.659 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.659 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.659 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.917 00:16:54.917 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.917 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.917 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.177 { 00:16:55.177 "cntlid": 59, 00:16:55.177 "qid": 0, 00:16:55.177 "state": "enabled", 00:16:55.177 "thread": "nvmf_tgt_poll_group_000", 00:16:55.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:55.177 "listen_address": { 00:16:55.177 "trtype": "TCP", 00:16:55.177 "adrfam": "IPv4", 00:16:55.177 "traddr": "10.0.0.2", 00:16:55.177 "trsvcid": "4420" 00:16:55.177 }, 00:16:55.177 "peer_address": { 00:16:55.177 "trtype": "TCP", 00:16:55.177 "adrfam": "IPv4", 00:16:55.177 "traddr": "10.0.0.1", 00:16:55.177 "trsvcid": "37042" 00:16:55.177 }, 00:16:55.177 "auth": { 00:16:55.177 "state": "completed", 00:16:55.177 "digest": "sha384", 00:16:55.177 "dhgroup": "ffdhe2048" 00:16:55.177 } 00:16:55.177 } 00:16:55.177 ]' 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.177 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.436 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:55.436 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.004 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.263 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.264 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.264 19:24:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.522 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.522 { 00:16:56.522 "cntlid": 61, 00:16:56.522 "qid": 0, 00:16:56.522 "state": "enabled", 00:16:56.522 "thread": "nvmf_tgt_poll_group_000", 00:16:56.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.522 "listen_address": { 00:16:56.522 "trtype": "TCP", 00:16:56.522 "adrfam": "IPv4", 00:16:56.522 "traddr": "10.0.0.2", 00:16:56.522 "trsvcid": "4420" 00:16:56.522 }, 00:16:56.522 "peer_address": { 00:16:56.522 "trtype": "TCP", 00:16:56.522 "adrfam": "IPv4", 00:16:56.522 "traddr": "10.0.0.1", 00:16:56.522 "trsvcid": "37076" 00:16:56.522 }, 00:16:56.522 "auth": { 00:16:56.522 "state": "completed", 00:16:56.522 "digest": "sha384", 00:16:56.522 "dhgroup": "ffdhe2048" 00:16:56.522 } 00:16:56.522 } 00:16:56.522 ]' 00:16:56.522 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.781 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.781 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.782 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:56.782 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.782 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.782 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.782 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.040 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:57.040 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.609 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.868 00:16:57.868 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.868 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.868 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.127 { 00:16:58.127 "cntlid": 63, 00:16:58.127 "qid": 0, 00:16:58.127 "state": "enabled", 00:16:58.127 "thread": "nvmf_tgt_poll_group_000", 00:16:58.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:58.127 "listen_address": { 00:16:58.127 "trtype": "TCP", 00:16:58.127 "adrfam": "IPv4", 00:16:58.127 "traddr": "10.0.0.2", 00:16:58.127 "trsvcid": "4420" 00:16:58.127 }, 00:16:58.127 "peer_address": { 00:16:58.127 "trtype": "TCP", 00:16:58.127 "adrfam": "IPv4", 00:16:58.127 "traddr": "10.0.0.1", 00:16:58.127 "trsvcid": "37102" 00:16:58.127 }, 00:16:58.127 "auth": { 00:16:58.127 "state": "completed", 00:16:58.127 "digest": "sha384", 00:16:58.127 "dhgroup": "ffdhe2048" 00:16:58.127 } 00:16:58.127 } 00:16:58.127 ]' 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.127 19:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.386 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:58.386 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:16:58.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.953 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:59.211 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.212 19:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.470 
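Every key/dhgroup pass traced above follows the same cycle. A condensed sketch, using the socket paths, NQNs, and flags exactly as they appear in the trace (DHHC-1 secrets shortened to placeholders; in the script itself the target-side steps run via rpc_cmd against the target's own RPC socket rather than host.sock, and are shown through the same client here only for brevity):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    subnqn=nqn.2024-03.io.spdk:cnode0

    # 1. Pin the SPDK host to a single digest/dhgroup combination.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # 2. Allow the host NQN on the subsystem with the key under test
    #    (--dhchap-ctrlr-key enables bidirectional authentication).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. Attach a controller; the DH-HMAC-CHAP handshake runs during this connect.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $hostnqn -n $subnqn -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 4. Verify the negotiated auth parameters, then detach.
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth'
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # 5. Repeat the handshake through the kernel initiator (-l 0: no
    #    controller-loss retry window), then tear down for the next key.
    nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
        --dhchap-secret 'DHHC-1:00:<key0>' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0>'
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn
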
00:16:59.470 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.470 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.470 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.729 { 00:16:59.729 "cntlid": 65, 00:16:59.729 "qid": 0, 00:16:59.729 "state": "enabled", 00:16:59.729 "thread": "nvmf_tgt_poll_group_000", 00:16:59.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.729 "listen_address": { 00:16:59.729 "trtype": "TCP", 00:16:59.729 "adrfam": "IPv4", 00:16:59.729 "traddr": "10.0.0.2", 00:16:59.729 "trsvcid": "4420" 00:16:59.729 }, 00:16:59.729 "peer_address": { 00:16:59.729 "trtype": "TCP", 00:16:59.729 "adrfam": "IPv4", 00:16:59.729 "traddr": "10.0.0.1", 00:16:59.729 "trsvcid": "37116" 00:16:59.729 }, 00:16:59.729 "auth": { 00:16:59.729 "state": "completed", 00:16:59.729 "digest": "sha384", 00:16:59.729 "dhgroup": "ffdhe3072" 00:16:59.729 } 00:16:59.729 } 00:16:59.729 ]' 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.729 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.988 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:16:59.988 19:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.569 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.828 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.087 00:17:01.087 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.087 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.087 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.346 { 00:17:01.346 "cntlid": 67, 00:17:01.346 "qid": 0, 00:17:01.346 "state": "enabled", 00:17:01.346 "thread": "nvmf_tgt_poll_group_000", 00:17:01.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.346 "listen_address": { 00:17:01.346 "trtype": "TCP", 00:17:01.346 "adrfam": "IPv4", 00:17:01.346 "traddr": "10.0.0.2", 00:17:01.346 "trsvcid": "4420" 00:17:01.346 }, 00:17:01.346 "peer_address": { 00:17:01.346 "trtype": "TCP", 00:17:01.346 "adrfam": "IPv4", 00:17:01.346 "traddr": "10.0.0.1", 00:17:01.346 "trsvcid": "41192" 00:17:01.346 }, 00:17:01.346 "auth": { 00:17:01.346 "state": "completed", 00:17:01.346 "digest": "sha384", 00:17:01.346 "dhgroup": "ffdhe3072" 00:17:01.346 } 00:17:01.346 } 00:17:01.346 ]' 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.346 19:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.346 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.346 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.346 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.606 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret 
DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:01.606 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.173 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.432 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.691 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.691 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.950 { 00:17:02.950 "cntlid": 69, 00:17:02.950 "qid": 0, 00:17:02.950 "state": "enabled", 00:17:02.950 "thread": "nvmf_tgt_poll_group_000", 00:17:02.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:02.950 "listen_address": { 00:17:02.950 "trtype": "TCP", 00:17:02.950 "adrfam": "IPv4", 00:17:02.950 "traddr": "10.0.0.2", 00:17:02.950 "trsvcid": "4420" 00:17:02.950 }, 00:17:02.950 "peer_address": { 00:17:02.950 "trtype": "TCP", 00:17:02.950 "adrfam": "IPv4", 00:17:02.950 "traddr": "10.0.0.1", 00:17:02.950 "trsvcid": "41214" 00:17:02.950 }, 00:17:02.950 "auth": { 00:17:02.950 "state": "completed", 00:17:02.950 "digest": "sha384", 00:17:02.950 "dhgroup": "ffdhe3072" 00:17:02.950 } 00:17:02.950 } 00:17:02.950 ]' 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.950 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:03.209 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:03.209 19:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:03.777 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
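One detail worth noticing in the key3 passes: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced at target/auth.sh@68 produces no arguments when the controller-key slot is empty, which is why key3 is added and attached above without a --dhchap-ctrlr-key, i.e. with unidirectional authentication only. A minimal illustration of that bash idiom (array contents here are placeholders, and the script's positional $3 is renamed keyid):

    # ':+' expands to the alternate words only when the slot is set and
    # non-empty, so an empty ckeys entry silently drops the bidirectional flag.
    ckeys=("ck0" "ck1" "ck2" "")
    for keyid in 0 3; do
        ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        echo "key$keyid extra args: ${ckey[@]:-<none>}"
    done
    # key0 extra args: --dhchap-ctrlr-key ckey0
    # key3 extra args: <none>
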
00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.036 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:04.295 00:17:04.295 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.295 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.295 19:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.295 { 00:17:04.295 "cntlid": 71, 00:17:04.295 "qid": 0, 00:17:04.295 "state": "enabled", 00:17:04.295 "thread": "nvmf_tgt_poll_group_000", 00:17:04.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.295 "listen_address": { 00:17:04.295 "trtype": "TCP", 00:17:04.295 "adrfam": "IPv4", 00:17:04.295 "traddr": "10.0.0.2", 00:17:04.295 "trsvcid": "4420" 00:17:04.295 }, 00:17:04.295 "peer_address": { 00:17:04.295 "trtype": "TCP", 00:17:04.295 "adrfam": "IPv4", 00:17:04.295 "traddr": "10.0.0.1", 00:17:04.295 "trsvcid": "41242" 00:17:04.295 }, 00:17:04.295 "auth": { 00:17:04.295 "state": "completed", 00:17:04.295 "digest": "sha384", 00:17:04.295 "dhgroup": "ffdhe3072" 00:17:04.295 } 00:17:04.295 } 00:17:04.295 ]' 00:17:04.295 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.554 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.813 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:04.813 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.381 19:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
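The outer for-dhgroup loop has now advanced to ffdhe4096, the third FFDHE group in this sha384 sweep (after ffdhe2048 and ffdhe3072 above). The checks at target/auth.sh@75-77 that every pass must satisfy amount to three jq assertions over the subsystem's qpair list; roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished
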
00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.381 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.640 00:17:05.640 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.640 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.640 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.899 { 00:17:05.899 "cntlid": 73, 00:17:05.899 "qid": 0, 00:17:05.899 "state": "enabled", 00:17:05.899 "thread": "nvmf_tgt_poll_group_000", 00:17:05.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.899 "listen_address": { 00:17:05.899 "trtype": "TCP", 00:17:05.899 "adrfam": "IPv4", 00:17:05.899 "traddr": "10.0.0.2", 00:17:05.899 "trsvcid": "4420" 00:17:05.899 }, 00:17:05.899 "peer_address": { 00:17:05.899 "trtype": "TCP", 00:17:05.899 "adrfam": "IPv4", 00:17:05.899 "traddr": "10.0.0.1", 00:17:05.899 "trsvcid": "41258" 00:17:05.899 }, 00:17:05.899 "auth": { 00:17:05.899 "state": "completed", 00:17:05.899 "digest": "sha384", 00:17:05.899 "dhgroup": "ffdhe4096" 00:17:05.899 } 00:17:05.899 } 00:17:05.899 ]' 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.899 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.157 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.157 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.157 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.157 
19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.157 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.157 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:06.157 19:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:06.724 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.983 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.244 00:17:07.244 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.244 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.244 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.503 { 00:17:07.503 "cntlid": 75, 00:17:07.503 "qid": 0, 00:17:07.503 "state": "enabled", 00:17:07.503 "thread": "nvmf_tgt_poll_group_000", 00:17:07.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.503 "listen_address": { 00:17:07.503 "trtype": "TCP", 00:17:07.503 "adrfam": "IPv4", 00:17:07.503 "traddr": "10.0.0.2", 00:17:07.503 "trsvcid": "4420" 00:17:07.503 }, 00:17:07.503 "peer_address": { 00:17:07.503 "trtype": "TCP", 00:17:07.503 "adrfam": "IPv4", 00:17:07.503 "traddr": "10.0.0.1", 00:17:07.503 "trsvcid": "41266" 00:17:07.503 }, 00:17:07.503 "auth": { 00:17:07.503 "state": "completed", 00:17:07.503 "digest": "sha384", 00:17:07.503 "dhgroup": "ffdhe4096" 00:17:07.503 } 00:17:07.503 } 00:17:07.503 ]' 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.503 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.504 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:07.762 19:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.331 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.590 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.849 00:17:08.849 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.849 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.849 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.108 { 00:17:09.108 "cntlid": 77, 00:17:09.108 "qid": 0, 00:17:09.108 "state": "enabled", 00:17:09.108 "thread": "nvmf_tgt_poll_group_000", 00:17:09.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.108 "listen_address": { 00:17:09.108 "trtype": "TCP", 00:17:09.108 "adrfam": "IPv4", 00:17:09.108 "traddr": "10.0.0.2", 00:17:09.108 "trsvcid": "4420" 00:17:09.108 }, 00:17:09.108 "peer_address": { 00:17:09.108 "trtype": "TCP", 00:17:09.108 "adrfam": "IPv4", 00:17:09.108 "traddr": "10.0.0.1", 00:17:09.108 "trsvcid": "41280" 00:17:09.108 }, 00:17:09.108 "auth": { 00:17:09.108 "state": "completed", 00:17:09.108 "digest": "sha384", 00:17:09.108 "dhgroup": "ffdhe4096" 00:17:09.108 } 00:17:09.108 } 00:17:09.108 ]' 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.108 19:24:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.108 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.367 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.367 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.367 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.367 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:09.367 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.935 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.194 19:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.453 00:17:10.453 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.453 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.453 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.713 { 00:17:10.713 "cntlid": 79, 00:17:10.713 "qid": 0, 00:17:10.713 "state": "enabled", 00:17:10.713 "thread": "nvmf_tgt_poll_group_000", 00:17:10.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.713 "listen_address": { 00:17:10.713 "trtype": "TCP", 00:17:10.713 "adrfam": "IPv4", 00:17:10.713 "traddr": "10.0.0.2", 00:17:10.713 "trsvcid": "4420" 00:17:10.713 }, 00:17:10.713 "peer_address": { 00:17:10.713 "trtype": "TCP", 00:17:10.713 "adrfam": "IPv4", 00:17:10.713 "traddr": "10.0.0.1", 00:17:10.713 "trsvcid": "37038" 00:17:10.713 }, 00:17:10.713 "auth": { 00:17:10.713 "state": "completed", 00:17:10.713 "digest": "sha384", 00:17:10.713 "dhgroup": "ffdhe4096" 00:17:10.713 } 00:17:10.713 } 00:17:10.713 ]' 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.713 19:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.713 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.972 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:10.972 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.539 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:11.798 19:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.798 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.057 00:17:12.057 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.057 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.057 19:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.316 { 00:17:12.316 "cntlid": 81, 00:17:12.316 "qid": 0, 00:17:12.316 "state": "enabled", 00:17:12.316 "thread": "nvmf_tgt_poll_group_000", 00:17:12.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.316 "listen_address": { 00:17:12.316 "trtype": "TCP", 00:17:12.316 "adrfam": "IPv4", 00:17:12.316 "traddr": "10.0.0.2", 00:17:12.316 "trsvcid": "4420" 00:17:12.316 }, 00:17:12.316 "peer_address": { 00:17:12.316 "trtype": "TCP", 00:17:12.316 "adrfam": "IPv4", 00:17:12.316 "traddr": "10.0.0.1", 00:17:12.316 "trsvcid": "37058" 00:17:12.316 }, 00:17:12.316 "auth": { 00:17:12.316 "state": "completed", 00:17:12.316 "digest": 
"sha384", 00:17:12.316 "dhgroup": "ffdhe6144" 00:17:12.316 } 00:17:12.316 } 00:17:12.316 ]' 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.316 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.575 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.575 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.575 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.575 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.575 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.834 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:12.834 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.402 19:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.402 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.971 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.971 { 00:17:13.971 "cntlid": 83, 00:17:13.971 "qid": 0, 00:17:13.971 "state": "enabled", 00:17:13.971 "thread": "nvmf_tgt_poll_group_000", 00:17:13.971 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.971 "listen_address": { 00:17:13.971 "trtype": "TCP", 00:17:13.971 "adrfam": "IPv4", 00:17:13.971 "traddr": "10.0.0.2", 00:17:13.971 
"trsvcid": "4420" 00:17:13.971 }, 00:17:13.971 "peer_address": { 00:17:13.971 "trtype": "TCP", 00:17:13.971 "adrfam": "IPv4", 00:17:13.971 "traddr": "10.0.0.1", 00:17:13.971 "trsvcid": "37074" 00:17:13.971 }, 00:17:13.971 "auth": { 00:17:13.971 "state": "completed", 00:17:13.971 "digest": "sha384", 00:17:13.971 "dhgroup": "ffdhe6144" 00:17:13.971 } 00:17:13.971 } 00:17:13.971 ]' 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.971 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.230 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.230 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.230 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.230 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.230 19:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.489 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:14.489 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:15.058 
19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.058 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.059 19:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.627 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.627 { 00:17:15.627 "cntlid": 85, 00:17:15.627 "qid": 0, 00:17:15.627 "state": "enabled", 00:17:15.627 "thread": "nvmf_tgt_poll_group_000", 00:17:15.627 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.627 "listen_address": { 00:17:15.627 "trtype": "TCP", 00:17:15.627 "adrfam": "IPv4", 00:17:15.627 "traddr": "10.0.0.2", 00:17:15.627 "trsvcid": "4420" 00:17:15.627 }, 00:17:15.627 "peer_address": { 00:17:15.627 "trtype": "TCP", 00:17:15.627 "adrfam": "IPv4", 00:17:15.627 "traddr": "10.0.0.1", 00:17:15.627 "trsvcid": "37088" 00:17:15.627 }, 00:17:15.627 "auth": { 00:17:15.627 "state": "completed", 00:17:15.627 "digest": "sha384", 00:17:15.627 "dhgroup": "ffdhe6144" 00:17:15.627 } 00:17:15.627 } 00:17:15.627 ]' 00:17:15.627 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.886 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.145 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:16.145 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.713 19:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:16.713 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.714 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.281 00:17:17.281 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.281 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.281 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.281 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.540 { 00:17:17.540 "cntlid": 87, 
00:17:17.540 "qid": 0, 00:17:17.540 "state": "enabled", 00:17:17.540 "thread": "nvmf_tgt_poll_group_000", 00:17:17.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:17.540 "listen_address": { 00:17:17.540 "trtype": "TCP", 00:17:17.540 "adrfam": "IPv4", 00:17:17.540 "traddr": "10.0.0.2", 00:17:17.540 "trsvcid": "4420" 00:17:17.540 }, 00:17:17.540 "peer_address": { 00:17:17.540 "trtype": "TCP", 00:17:17.540 "adrfam": "IPv4", 00:17:17.540 "traddr": "10.0.0.1", 00:17:17.540 "trsvcid": "37114" 00:17:17.540 }, 00:17:17.540 "auth": { 00:17:17.540 "state": "completed", 00:17:17.540 "digest": "sha384", 00:17:17.540 "dhgroup": "ffdhe6144" 00:17:17.540 } 00:17:17.540 } 00:17:17.540 ]' 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.540 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.799 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:17.799 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:18.366 19:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.366 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.626 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.194 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.194 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.453 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:19.453 {
00:17:19.453 "cntlid": 89,
00:17:19.453 "qid": 0,
00:17:19.453 "state": "enabled",
00:17:19.453 "thread": "nvmf_tgt_poll_group_000",
00:17:19.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:19.453 "listen_address": {
00:17:19.453 "trtype": "TCP",
00:17:19.453 "adrfam": "IPv4",
00:17:19.453 "traddr": "10.0.0.2",
00:17:19.453 "trsvcid": "4420"
00:17:19.453 },
00:17:19.453 "peer_address": {
00:17:19.453 "trtype": "TCP",
00:17:19.453 "adrfam": "IPv4",
00:17:19.453 "traddr": "10.0.0.1",
00:17:19.453 "trsvcid": "37138"
00:17:19.453 },
00:17:19.453 "auth": {
00:17:19.453 "state": "completed",
00:17:19.453 "digest": "sha384",
00:17:19.453 "dhgroup": "ffdhe8192"
00:17:19.453 }
00:17:19.453 }
00:17:19.453 ]'
00:17:19.453 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:19.453 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:19.712 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:19.712 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:20.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:20.284 19:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:20.284 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:20.611 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:20.611 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.611 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.611 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:20.943
00:17:20.943 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:20.943 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:20.943 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:21.201 {
00:17:21.201 "cntlid": 91,
00:17:21.201 "qid": 0,
00:17:21.201 "state": "enabled",
00:17:21.201 "thread": "nvmf_tgt_poll_group_000",
00:17:21.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:21.201 "listen_address": {
00:17:21.201 "trtype": "TCP",
00:17:21.201 "adrfam": "IPv4",
00:17:21.201 "traddr": "10.0.0.2",
00:17:21.201 "trsvcid": "4420"
00:17:21.201 },
00:17:21.201 "peer_address": {
00:17:21.201 "trtype": "TCP",
00:17:21.201 "adrfam": "IPv4",
00:17:21.201 "traddr": "10.0.0.1",
00:17:21.201 "trsvcid": "59152"
00:17:21.201 },
00:17:21.201 "auth": {
00:17:21.201 "state": "completed",
00:17:21.201 "digest": "sha384",
00:17:21.201 "dhgroup": "ffdhe8192"
00:17:21.201 }
00:17:21.201 }
00:17:21.201 ]'
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:21.201 19:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:21.460 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:21.460 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:22.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:22.028 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:22.286 19:24:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:22.854
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:22.854 {
00:17:22.854 "cntlid": 93,
00:17:22.854 "qid": 0,
00:17:22.854 "state": "enabled",
00:17:22.854 "thread": "nvmf_tgt_poll_group_000",
00:17:22.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:22.854 "listen_address": {
00:17:22.854 "trtype": "TCP",
00:17:22.854 "adrfam": "IPv4",
00:17:22.854 "traddr": "10.0.0.2",
00:17:22.854 "trsvcid": "4420"
00:17:22.854 },
00:17:22.854 "peer_address": {
00:17:22.854 "trtype": "TCP",
00:17:22.854 "adrfam": "IPv4",
00:17:22.854 "traddr": "10.0.0.1",
00:17:22.854 "trsvcid": "59196"
00:17:22.854 },
00:17:22.854 "auth": {
00:17:22.854 "state": "completed",
00:17:22.854 "digest": "sha384",
00:17:22.854 "dhgroup": "ffdhe8192"
00:17:22.854 }
00:17:22.854 }
00:17:22.854 ]'
00:17:22.854 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:23.113 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:23.372 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:23.372 19:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:23.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:23.939 19:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:24.507
00:17:24.507 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:24.507 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:24.507 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:24.766 {
00:17:24.766 "cntlid": 95,
00:17:24.766 "qid": 0,
00:17:24.766 "state": "enabled",
00:17:24.766 "thread": "nvmf_tgt_poll_group_000",
00:17:24.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:24.766 "listen_address": {
00:17:24.766 "trtype": "TCP",
00:17:24.766 "adrfam": "IPv4",
00:17:24.766 "traddr": "10.0.0.2",
00:17:24.766 "trsvcid": "4420"
00:17:24.766 },
00:17:24.766 "peer_address": {
00:17:24.766 "trtype": "TCP",
00:17:24.766 "adrfam": "IPv4",
00:17:24.766 "traddr": "10.0.0.1",
00:17:24.766 "trsvcid": "59224"
00:17:24.766 },
00:17:24.766 "auth": {
00:17:24.766 "state": "completed",
00:17:24.766 "digest": "sha384",
00:17:24.766 "dhgroup": "ffdhe8192"
00:17:24.766 }
00:17:24.766 }
00:17:24.766 ]'
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:24.766 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:25.024 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:25.025 19:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:25.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:25.592 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:25.851 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:26.110
00:17:26.110 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:26.110 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:26.110 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:26.368 {
00:17:26.368 "cntlid": 97,
00:17:26.368 "qid": 0,
00:17:26.368 "state": "enabled",
00:17:26.368 "thread": "nvmf_tgt_poll_group_000",
00:17:26.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:26.368 "listen_address": {
00:17:26.368 "trtype": "TCP",
00:17:26.368 "adrfam": "IPv4",
00:17:26.368 "traddr": "10.0.0.2",
00:17:26.368 "trsvcid": "4420"
00:17:26.368 },
00:17:26.368 "peer_address": {
00:17:26.368 "trtype": "TCP",
00:17:26.368 "adrfam": "IPv4",
00:17:26.368 "traddr": "10.0.0.1",
00:17:26.368 "trsvcid": "59250"
00:17:26.368 },
00:17:26.368 "auth": {
00:17:26.368 "state": "completed",
00:17:26.368 "digest": "sha512",
00:17:26.368 "dhgroup": "null"
00:17:26.368 }
00:17:26.368 }
00:17:26.368 ]'
00:17:26.368 19:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:26.368 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:26.627 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:26.627 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:27.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:27.194 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:27.452 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:27.710
00:17:27.710 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:27.710 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:27.710 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:27.970 {
00:17:27.970 "cntlid": 99,
00:17:27.970 "qid": 0,
00:17:27.970 "state": "enabled",
00:17:27.970 "thread": "nvmf_tgt_poll_group_000",
00:17:27.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:27.970 "listen_address": {
00:17:27.970 "trtype": "TCP",
00:17:27.970 "adrfam": "IPv4",
00:17:27.970 "traddr": "10.0.0.2",
00:17:27.970 "trsvcid": "4420"
00:17:27.970 },
00:17:27.970 "peer_address": {
00:17:27.970 "trtype": "TCP",
00:17:27.970 "adrfam": "IPv4",
00:17:27.970 "traddr": "10.0.0.1",
00:17:27.970 "trsvcid": "59296"
00:17:27.970 },
00:17:27.970 "auth": {
00:17:27.970 "state": "completed",
00:17:27.970 "digest": "sha512",
00:17:27.970 "dhgroup": "null"
00:17:27.970 }
00:17:27.970 }
00:17:27.970 ]'
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:27.970 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:28.229 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:28.229 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:28.797 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:28.797 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:29.056 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:29.057 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:29.315
00:17:29.315 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:29.315 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:29.315 19:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:29.574 {
00:17:29.574 "cntlid": 101,
00:17:29.574 "qid": 0,
00:17:29.574 "state": "enabled",
00:17:29.574 "thread": "nvmf_tgt_poll_group_000",
00:17:29.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:29.574 "listen_address": {
00:17:29.574 "trtype": "TCP",
00:17:29.574 "adrfam": "IPv4",
00:17:29.574 "traddr": "10.0.0.2",
00:17:29.574 "trsvcid": "4420"
00:17:29.574 },
00:17:29.574 "peer_address": {
00:17:29.574 "trtype": "TCP",
00:17:29.574 "adrfam": "IPv4",
00:17:29.574 "traddr": "10.0.0.1",
00:17:29.574 "trsvcid": "59332"
00:17:29.574 },
00:17:29.574 "auth": {
00:17:29.574 "state": "completed",
00:17:29.574 "digest": "sha512",
00:17:29.574 "dhgroup": "null"
00:17:29.574 }
00:17:29.574 }
00:17:29.574 ]'
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:29.574 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:29.833 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:29.833 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:30.399 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:30.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:30.399 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:30.658 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:30.916
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:30.916 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:30.916 {
00:17:30.917 "cntlid": 103,
00:17:30.917 "qid": 0,
00:17:30.917 "state": "enabled",
00:17:30.917 "thread": "nvmf_tgt_poll_group_000",
00:17:30.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:30.917 "listen_address": {
00:17:30.917 "trtype": "TCP",
00:17:30.917 "adrfam": "IPv4",
00:17:30.917 "traddr": "10.0.0.2",
00:17:30.917 "trsvcid": "4420"
00:17:30.917 },
00:17:30.917 "peer_address": {
00:17:30.917 "trtype": "TCP",
00:17:30.917 "adrfam": "IPv4",
00:17:30.917 "traddr": "10.0.0.1",
00:17:30.917 "trsvcid": "53536"
00:17:30.917 },
00:17:30.917 "auth": {
00:17:30.917 "state": "completed",
00:17:30.917 "digest": "sha512",
00:17:30.917 "dhgroup": "null"
00:17:30.917 }
00:17:30.917 }
00:17:30.917 ]'
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:31.175 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:31.434 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:31.434 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:32.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.002 19:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:32.261
00:17:32.261 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:32.261 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:32.261 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:32.520 {
00:17:32.520 "cntlid": 105,
00:17:32.520 "qid": 0,
00:17:32.520 "state": "enabled",
00:17:32.520 "thread": "nvmf_tgt_poll_group_000",
00:17:32.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:32.520 "listen_address": {
00:17:32.520 "trtype": "TCP",
00:17:32.520 "adrfam": "IPv4",
00:17:32.520 "traddr": "10.0.0.2",
00:17:32.520 "trsvcid": "4420"
00:17:32.520 },
00:17:32.520 "peer_address": {
00:17:32.520 "trtype": "TCP",
00:17:32.520 "adrfam": "IPv4",
00:17:32.520 "traddr": "10.0.0.1",
00:17:32.520 "trsvcid": "53564"
00:17:32.520 },
00:17:32.520 "auth": {
00:17:32.520 "state": "completed",
00:17:32.520 "digest": "sha512",
00:17:32.520 "dhgroup": "ffdhe2048"
00:17:32.520 }
00:17:32.520 }
00:17:32.520 ]'
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:32.520 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:32.779 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:32.779 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:32.779 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:32.780 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:32.780 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:32.780 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:32.780 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:33.347 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:33.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:33.347 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:33.347 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.347 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.347 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.347 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:33.606 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.607 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:33.866
00:17:33.866 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:33.866 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:33.866 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:34.126 {
00:17:34.126 "cntlid": 107,
00:17:34.126 "qid": 0,
00:17:34.126 "state": "enabled",
00:17:34.126 "thread": "nvmf_tgt_poll_group_000",
00:17:34.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:34.126 "listen_address": {
00:17:34.126 "trtype": "TCP",
00:17:34.126 "adrfam": "IPv4",
00:17:34.126 "traddr": "10.0.0.2",
00:17:34.126 "trsvcid": "4420"
00:17:34.126 },
00:17:34.126 "peer_address": {
00:17:34.126 "trtype": "TCP",
00:17:34.126 "adrfam": "IPv4",
00:17:34.126 "traddr": "10.0.0.1",
00:17:34.126 "trsvcid": "53580"
00:17:34.126 },
00:17:34.126 "auth": {
00:17:34.126 "state": "completed",
00:17:34.126 "digest": "sha512",
00:17:34.126 "dhgroup": "ffdhe2048"
00:17:34.126 }
00:17:34.126 }
00:17:34.126 ]'
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:34.126 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:34.384 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:34.384 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:34.384 19:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:34.384 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:34.384 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:34.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:34.952 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.211 19:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:35.471
00:17:35.471 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:35.471 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:35.471 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:35.730 {
00:17:35.730 "cntlid": 109,
00:17:35.730 "qid": 0,
00:17:35.730 "state": "enabled",
00:17:35.730 "thread": "nvmf_tgt_poll_group_000",
00:17:35.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:35.730 "listen_address": {
00:17:35.730 "trtype": "TCP",
00:17:35.730 "adrfam": "IPv4",
00:17:35.730 "traddr": "10.0.0.2",
00:17:35.730 "trsvcid": "4420"
00:17:35.730 },
00:17:35.730 "peer_address": {
00:17:35.730 "trtype": "TCP",
00:17:35.730 "adrfam": "IPv4",
00:17:35.730 "traddr": "10.0.0.1",
00:17:35.730 "trsvcid": "53616"
00:17:35.730 },
00:17:35.730 "auth": {
00:17:35.730 "state": "completed",
00:17:35.730 "digest": "sha512",
00:17:35.730 "dhgroup": "ffdhe2048"
00:17:35.730 }
00:17:35.730 }
00:17:35.730 ]'
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:35.730 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:35.990 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:35.990 19:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:36.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:36.558 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:36.817 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:37.076
00:17:37.076 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:37.076 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:37.076 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:37.334 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:37.334 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:37.334 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:37.334 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.334 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:37.334 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:37.334 {
00:17:37.334 "cntlid": 111,
00:17:37.334 "qid": 0,
00:17:37.334 "state": "enabled",
00:17:37.334 "thread": "nvmf_tgt_poll_group_000",
00:17:37.334 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:37.334 "listen_address": {
00:17:37.334 "trtype": "TCP",
00:17:37.334 "adrfam": "IPv4",
00:17:37.335 "traddr": "10.0.0.2",
00:17:37.335 "trsvcid": "4420"
00:17:37.335 },
00:17:37.335 "peer_address": {
00:17:37.335 "trtype": "TCP",
00:17:37.335 "adrfam": "IPv4",
00:17:37.335 "traddr": "10.0.0.1",
00:17:37.335 "trsvcid": "53632"
00:17:37.335 },
00:17:37.335 "auth": {
00:17:37.335 "state": "completed",
00:17:37.335 "digest": "sha512",
00:17:37.335 "dhgroup": "ffdhe2048"
00:17:37.335 }
00:17:37.335 }
00:17:37.335 ]'
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:37.335 19:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:37.593 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:37.593 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:38.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:38.161 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.419 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:38.678
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:38.678 {
00:17:38.678 "cntlid": 113,
00:17:38.678 "qid": 0,
00:17:38.678 "state": "enabled",
00:17:38.678 "thread": "nvmf_tgt_poll_group_000",
00:17:38.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:38.678 "listen_address": {
00:17:38.678 "trtype": "TCP",
00:17:38.678 "adrfam": "IPv4",
00:17:38.678 "traddr": "10.0.0.2",
00:17:38.678 "trsvcid": "4420"
00:17:38.678 },
00:17:38.678 "peer_address": {
00:17:38.678 "trtype": "TCP",
00:17:38.678 "adrfam": "IPv4",
00:17:38.678 "traddr": "10.0.0.1",
00:17:38.678 "trsvcid": "53664"
00:17:38.678 },
00:17:38.678 "auth": {
00:17:38.678 "state": "completed",
00:17:38.678 "digest": "sha512",
00:17:38.678 "dhgroup": "ffdhe3072"
00:17:38.678 }
00:17:38.678 }
00:17:38.678 ]'
00:17:38.678 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:38.937 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:39.196 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:39.196 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:39.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:39.764 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:40.023
00:17:40.023 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:40.023 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:40.023 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:40.282 {
00:17:40.282 "cntlid": 115,
00:17:40.282 "qid": 0,
00:17:40.282 "state": "enabled",
00:17:40.282 "thread": "nvmf_tgt_poll_group_000",
00:17:40.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:40.282 "listen_address": {
00:17:40.282 "trtype": "TCP",
00:17:40.282 "adrfam": "IPv4",
00:17:40.282 "traddr": "10.0.0.2",
00:17:40.282 "trsvcid": "4420"
00:17:40.282 },
00:17:40.282 "peer_address": {
00:17:40.282 "trtype": "TCP",
00:17:40.282 "adrfam": "IPv4",
00:17:40.282 "traddr": "10.0.0.1",
00:17:40.282 "trsvcid": "33638"
00:17:40.282 },
00:17:40.282 "auth": {
00:17:40.282 "state": "completed",
00:17:40.282 "digest": "sha512",
00:17:40.282 "dhgroup": "ffdhe3072"
00:17:40.282 }
00:17:40.282 }
00:17:40.282 ]'
00:17:40.282 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:40.282 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:40.282 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:40.282 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:40.541 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:40.541 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:40.541 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:40.541 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:40.541 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:40.541 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:41.108 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:41.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:41.108 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:41.109 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.109 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.109 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.109 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:41.109 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:41.109 19:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:41.367 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:17:41.367 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:41.367 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:41.367 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:41.367 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:41.368 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:41.627
00:17:41.627 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:41.627 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:41.627 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:41.886 {
00:17:41.886 "cntlid": 117,
00:17:41.886 "qid": 0,
00:17:41.886 "state": "enabled",
00:17:41.886 "thread": "nvmf_tgt_poll_group_000",
00:17:41.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:41.886 "listen_address": {
00:17:41.886 "trtype": "TCP",
00:17:41.886 "adrfam": "IPv4",
00:17:41.886 "traddr": "10.0.0.2",
00:17:41.886 "trsvcid": "4420"
00:17:41.886 },
00:17:41.886 "peer_address": {
00:17:41.886 "trtype": "TCP",
00:17:41.886 "adrfam": "IPv4",
00:17:41.886 "traddr": "10.0.0.1",
00:17:41.886 "trsvcid": "33670"
00:17:41.886 },
00:17:41.886 "auth": {
00:17:41.886 "state": "completed",
00:17:41.886 "digest": "sha512",
00:17:41.886 "dhgroup": "ffdhe3072"
00:17:41.886 }
00:17:41.886 }
00:17:41.886 ]'
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:41.886 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.144 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:42.144 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:42.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:42.712 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:42.972 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:43.231
00:17:43.231 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:43.231 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:43.231 19:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:43.489 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:43.489 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:43.489 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:43.489 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.489 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:43.489 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:43.489 {
00:17:43.489 "cntlid": 119,
00:17:43.489 "qid": 0,
00:17:43.489 "state": "enabled",
00:17:43.489 "thread": "nvmf_tgt_poll_group_000",
00:17:43.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:43.489 "listen_address": {
00:17:43.489 "trtype": "TCP",
00:17:43.489 "adrfam": "IPv4",
00:17:43.489 "traddr": "10.0.0.2",
00:17:43.489 "trsvcid": "4420"
00:17:43.489 },
00:17:43.489 "peer_address": {
00:17:43.489 "trtype": "TCP",
00:17:43.489 "adrfam": "IPv4",
00:17:43.489 "traddr": "10.0.0.1",
00:17:43.489 "trsvcid": "33698"
00:17:43.489 },
00:17:43.490 "auth": {
00:17:43.490 "state": "completed",
00:17:43.490 "digest": "sha512",
00:17:43.490 "dhgroup": "ffdhe3072"
00:17:43.490 }
00:17:43.490 }
00:17:43.490 ]'
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:43.490 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:43.748 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:43.748 19:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=:
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:44.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:44.315 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:44.574 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:44.832
00:17:44.832 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:44.832 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:44.832 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:45.091 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:45.091 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:45.091 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.091 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.091 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.091 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:45.091 {
00:17:45.091 "cntlid": 121,
00:17:45.092 "qid": 0,
00:17:45.092 "state": "enabled",
00:17:45.092 "thread": "nvmf_tgt_poll_group_000",
00:17:45.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:45.092 "listen_address": {
00:17:45.092 "trtype": "TCP",
00:17:45.092 "adrfam": "IPv4",
00:17:45.092 "traddr": "10.0.0.2",
00:17:45.092 "trsvcid": "4420"
00:17:45.092 },
00:17:45.092 "peer_address": {
00:17:45.092 "trtype": "TCP",
00:17:45.092 "adrfam": "IPv4",
00:17:45.092 "traddr": "10.0.0.1",
00:17:45.092 "trsvcid": "33734"
00:17:45.092 },
00:17:45.092 "auth": {
00:17:45.092 "state": "completed",
00:17:45.092 "digest": "sha512",
00:17:45.092 "dhgroup": "ffdhe4096"
00:17:45.092 }
00:17:45.092 }
00:17:45.092 ]'
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:45.092 19:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:45.351 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:45.351 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=:
00:17:45.917 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:45.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:45.918 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:46.176 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:17:46.176 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:46.176 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:46.176 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.177 19:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:46.435
00:17:46.435 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:46.435 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:46.435 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:46.694 {
00:17:46.694 "cntlid": 123,
00:17:46.694 "qid": 0,
00:17:46.694 "state": "enabled",
00:17:46.694 "thread": "nvmf_tgt_poll_group_000",
00:17:46.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:46.694 "listen_address": {
00:17:46.694 "trtype": "TCP",
00:17:46.694 "adrfam": "IPv4",
00:17:46.694 "traddr": "10.0.0.2",
00:17:46.694 "trsvcid": "4420"
00:17:46.694 },
00:17:46.694 "peer_address": {
00:17:46.694 "trtype": "TCP",
00:17:46.694 "adrfam": "IPv4",
00:17:46.694 "traddr": "10.0.0.1",
00:17:46.694 "trsvcid": "33760"
00:17:46.694 },
00:17:46.694 "auth": {
00:17:46.694 "state": "completed",
00:17:46.694 "digest": "sha512",
00:17:46.694 "dhgroup": "ffdhe4096"
00:17:46.694 }
00:17:46.694 }
00:17:46.694 ]'
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:46.694 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:46.954 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:46.954 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:46.954 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:46.954 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:46.954 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:46.954 19:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==:
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:47.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:47.522 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:47.779 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:48.037
00:17:48.037 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:48.037 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:48.037 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:48.297 {
00:17:48.297 "cntlid": 125,
00:17:48.297 "qid": 0,
00:17:48.297 "state": "enabled",
00:17:48.297 "thread": "nvmf_tgt_poll_group_000",
00:17:48.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:48.297 "listen_address": {
00:17:48.297 "trtype": "TCP",
00:17:48.297 "adrfam": "IPv4",
00:17:48.297 "traddr": "10.0.0.2",
00:17:48.297 "trsvcid": "4420"
00:17:48.297 },
00:17:48.297 "peer_address": {
00:17:48.297 "trtype": "TCP",
00:17:48.297 "adrfam": "IPv4",
00:17:48.297 "traddr": "10.0.0.1",
00:17:48.297 "trsvcid": "33770"
00:17:48.297 },
00:17:48.297 "auth": {
00:17:48.297 "state": "completed",
00:17:48.297 "digest": "sha512",
00:17:48.297 "dhgroup": "ffdhe4096"
00:17:48.297 }
00:17:48.297 }
00:17:48.297 ]'
00:17:48.297 19:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:48.297 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:48.297 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:48.297 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:17:48.297 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:48.297 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:48.297 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:48.556 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:48.556 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:48.556 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67:
00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:49.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.124 19:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.383 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:49.642 00:17:49.642 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.642 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.642 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.901 19:25:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.901 { 00:17:49.901 "cntlid": 127, 00:17:49.901 "qid": 0, 00:17:49.901 "state": "enabled", 00:17:49.901 "thread": "nvmf_tgt_poll_group_000", 00:17:49.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:49.901 "listen_address": { 00:17:49.901 "trtype": "TCP", 00:17:49.901 "adrfam": "IPv4", 00:17:49.901 "traddr": "10.0.0.2", 00:17:49.901 "trsvcid": "4420" 00:17:49.901 }, 00:17:49.901 "peer_address": { 00:17:49.901 "trtype": "TCP", 00:17:49.901 "adrfam": "IPv4", 00:17:49.901 "traddr": "10.0.0.1", 00:17:49.901 "trsvcid": "33792" 00:17:49.901 }, 00:17:49.901 "auth": { 00:17:49.901 "state": "completed", 00:17:49.901 "digest": "sha512", 00:17:49.901 "dhgroup": "ffdhe4096" 00:17:49.901 } 00:17:49.901 } 00:17:49.901 ]' 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.901 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.160 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:50.160 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.728 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.987 19:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:51.246 00:17:51.246 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.246 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.246 
19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.505 { 00:17:51.505 "cntlid": 129, 00:17:51.505 "qid": 0, 00:17:51.505 "state": "enabled", 00:17:51.505 "thread": "nvmf_tgt_poll_group_000", 00:17:51.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:51.505 "listen_address": { 00:17:51.505 "trtype": "TCP", 00:17:51.505 "adrfam": "IPv4", 00:17:51.505 "traddr": "10.0.0.2", 00:17:51.505 "trsvcid": "4420" 00:17:51.505 }, 00:17:51.505 "peer_address": { 00:17:51.505 "trtype": "TCP", 00:17:51.505 "adrfam": "IPv4", 00:17:51.505 "traddr": "10.0.0.1", 00:17:51.505 "trsvcid": "40666" 00:17:51.505 }, 00:17:51.505 "auth": { 00:17:51.505 "state": "completed", 00:17:51.505 "digest": "sha512", 00:17:51.505 "dhgroup": "ffdhe6144" 00:17:51.505 } 00:17:51.505 } 00:17:51.505 ]' 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.505 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.764 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.764 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.764 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.764 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.764 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.023 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:52.023 19:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret 
DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.590 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.591 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.591 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.591 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.158 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.158 { 00:17:53.158 "cntlid": 131, 00:17:53.158 "qid": 0, 00:17:53.158 "state": "enabled", 00:17:53.158 "thread": "nvmf_tgt_poll_group_000", 00:17:53.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:53.158 "listen_address": { 00:17:53.158 "trtype": "TCP", 00:17:53.158 "adrfam": "IPv4", 00:17:53.158 "traddr": "10.0.0.2", 00:17:53.158 "trsvcid": "4420" 00:17:53.158 }, 00:17:53.158 "peer_address": { 00:17:53.158 "trtype": "TCP", 00:17:53.158 "adrfam": "IPv4", 00:17:53.158 "traddr": "10.0.0.1", 00:17:53.158 "trsvcid": "40706" 00:17:53.158 }, 00:17:53.158 "auth": { 00:17:53.158 "state": "completed", 00:17:53.158 "digest": "sha512", 00:17:53.158 "dhgroup": "ffdhe6144" 00:17:53.158 } 00:17:53.158 } 00:17:53.158 ]' 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:53.158 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.420 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.420 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.420 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.420 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.420 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.679 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:53.679 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:17:54.246 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.246 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.247 19:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.247 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.247 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.812 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.812 { 00:17:54.812 "cntlid": 133, 00:17:54.812 "qid": 0, 00:17:54.812 "state": "enabled", 00:17:54.812 "thread": "nvmf_tgt_poll_group_000", 00:17:54.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:54.812 "listen_address": { 00:17:54.812 "trtype": "TCP", 00:17:54.812 "adrfam": "IPv4", 00:17:54.812 "traddr": "10.0.0.2", 00:17:54.812 "trsvcid": "4420" 00:17:54.812 }, 00:17:54.812 "peer_address": { 00:17:54.812 "trtype": "TCP", 00:17:54.812 "adrfam": "IPv4", 00:17:54.812 "traddr": "10.0.0.1", 00:17:54.812 "trsvcid": "40736" 00:17:54.812 }, 00:17:54.812 "auth": { 00:17:54.812 "state": "completed", 00:17:54.812 "digest": "sha512", 00:17:54.812 "dhgroup": "ffdhe6144" 00:17:54.812 } 00:17:54.812 } 00:17:54.812 ]' 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.812 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.071 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:55.071 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.071 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.071 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.071 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.329 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret 
DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:55.329 19:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:17:55.896 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:56.463 00:17:56.463 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.463 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.463 19:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.463 { 00:17:56.463 "cntlid": 135, 00:17:56.463 "qid": 0, 00:17:56.463 "state": "enabled", 00:17:56.463 "thread": "nvmf_tgt_poll_group_000", 00:17:56.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:56.463 "listen_address": { 00:17:56.463 "trtype": "TCP", 00:17:56.463 "adrfam": "IPv4", 00:17:56.463 "traddr": "10.0.0.2", 00:17:56.463 "trsvcid": "4420" 00:17:56.463 }, 00:17:56.463 "peer_address": { 00:17:56.463 "trtype": "TCP", 00:17:56.463 "adrfam": "IPv4", 00:17:56.463 "traddr": "10.0.0.1", 00:17:56.463 "trsvcid": "40776" 00:17:56.463 }, 00:17:56.463 "auth": { 00:17:56.463 "state": "completed", 00:17:56.463 "digest": "sha512", 00:17:56.463 "dhgroup": "ffdhe6144" 00:17:56.463 } 00:17:56.463 } 00:17:56.463 ]' 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.463 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.722 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.722 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.722 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.722 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.722 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.722 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.980 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:56.981 19:25:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.548 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.115 00:17:58.115 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.115 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.115 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.396 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.396 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.396 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.396 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.396 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.396 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.396 { 00:17:58.396 "cntlid": 137, 00:17:58.396 "qid": 0, 00:17:58.396 "state": "enabled", 00:17:58.396 "thread": "nvmf_tgt_poll_group_000", 00:17:58.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:58.397 "listen_address": { 00:17:58.397 "trtype": "TCP", 00:17:58.397 "adrfam": "IPv4", 00:17:58.397 "traddr": "10.0.0.2", 00:17:58.397 "trsvcid": "4420" 00:17:58.397 }, 00:17:58.397 "peer_address": { 00:17:58.397 "trtype": "TCP", 00:17:58.397 "adrfam": "IPv4", 00:17:58.397 "traddr": "10.0.0.1", 00:17:58.397 "trsvcid": "40792" 00:17:58.397 }, 00:17:58.397 "auth": { 00:17:58.397 "state": "completed", 00:17:58.397 "digest": "sha512", 00:17:58.397 "dhgroup": "ffdhe8192" 00:17:58.397 } 00:17:58.397 } 00:17:58.397 ]' 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.397 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.690 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:58.690 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.321 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.580 19:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.580 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.838 00:17:59.838 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.838 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.838 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.097 { 00:18:00.097 "cntlid": 139, 00:18:00.097 "qid": 0, 00:18:00.097 "state": "enabled", 00:18:00.097 "thread": "nvmf_tgt_poll_group_000", 00:18:00.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:00.097 "listen_address": { 00:18:00.097 "trtype": "TCP", 00:18:00.097 "adrfam": "IPv4", 00:18:00.097 "traddr": "10.0.0.2", 00:18:00.097 "trsvcid": "4420" 00:18:00.097 }, 00:18:00.097 "peer_address": { 00:18:00.097 "trtype": "TCP", 00:18:00.097 "adrfam": "IPv4", 00:18:00.097 "traddr": "10.0.0.1", 00:18:00.097 "trsvcid": "40822" 00:18:00.097 }, 00:18:00.097 "auth": { 00:18:00.097 "state": "completed", 00:18:00.097 "digest": "sha512", 00:18:00.097 "dhgroup": "ffdhe8192" 00:18:00.097 } 00:18:00.097 } 00:18:00.097 ]' 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.097 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.356 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:00.356 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.356 19:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.356 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.356 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.356 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:18:00.614 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: --dhchap-ctrl-secret DHHC-1:02:Y2MwZTZkNjEyYTFmNjQ2MTM5Yzg4MGE1OWNlNmExN2M1YWNiNzI0NmQ2Mjc3OGM3XSCC5g==: 00:18:01.183 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.183 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:01.183 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.183 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.183 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.184 19:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.184 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.752 00:18:01.752 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.752 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.752 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.012 { 00:18:02.012 "cntlid": 141, 00:18:02.012 "qid": 0, 00:18:02.012 "state": "enabled", 00:18:02.012 "thread": "nvmf_tgt_poll_group_000", 00:18:02.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:02.012 "listen_address": { 00:18:02.012 "trtype": "TCP", 00:18:02.012 "adrfam": "IPv4", 00:18:02.012 "traddr": "10.0.0.2", 00:18:02.012 "trsvcid": "4420" 00:18:02.012 }, 00:18:02.012 "peer_address": { 00:18:02.012 "trtype": "TCP", 00:18:02.012 "adrfam": "IPv4", 00:18:02.012 "traddr": "10.0.0.1", 00:18:02.012 "trsvcid": "37810" 00:18:02.012 }, 00:18:02.012 "auth": { 00:18:02.012 "state": "completed", 00:18:02.012 "digest": "sha512", 00:18:02.012 "dhgroup": "ffdhe8192" 00:18:02.012 } 00:18:02.012 } 00:18:02.012 ]' 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.012 19:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.012 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.271 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:18:02.271 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:01:MzMxMzE3Y2I2MzA3Njc4MmJlMTc3ZjM2NTVkZWI3ODmCul67: 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:02.839 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.098 19:25:26 
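# --- Annotation: conditional controller-key flag --------------------------------
# The ckey assignment traced above, ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}),
# uses bash's ${var:+word} expansion: the flag pair is generated only when a
# controller key exists for the requested slot; otherwise the array stays empty
# and no --dhchap-ctrlr-key argument reaches the RPC at all. Equivalent long form:
if [[ -n ${ckeys[$3]} ]]; then
    ckey=(--dhchap-ctrlr-key "ckey$3")   # slot has a controller key
else
    ckey=()                              # flag omitted entirely
fi
# --------------------------------------------------------------------------------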
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.098 19:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.667 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.667 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.667 { 00:18:03.667 "cntlid": 143, 00:18:03.667 "qid": 0, 00:18:03.667 "state": "enabled", 00:18:03.667 "thread": "nvmf_tgt_poll_group_000", 00:18:03.668 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:03.668 "listen_address": { 00:18:03.668 "trtype": "TCP", 00:18:03.668 "adrfam": "IPv4", 00:18:03.668 "traddr": "10.0.0.2", 00:18:03.668 "trsvcid": "4420" 00:18:03.668 }, 00:18:03.668 "peer_address": { 00:18:03.668 "trtype": "TCP", 00:18:03.668 "adrfam": "IPv4", 00:18:03.668 "traddr": "10.0.0.1", 00:18:03.668 "trsvcid": "37832" 00:18:03.668 }, 00:18:03.668 "auth": { 00:18:03.668 "state": "completed", 00:18:03.668 "digest": "sha512", 00:18:03.668 "dhgroup": "ffdhe8192" 00:18:03.668 } 00:18:03.668 } 00:18:03.668 ]' 00:18:03.668 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.927 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.927 
19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.927 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.927 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.927 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.927 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.927 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.187 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:18:04.187 19:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:18:04.754 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.754 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:04.754 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.754 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.754 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.755 19:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.755 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.324 00:18:05.324 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.324 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.324 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:05.583 { 00:18:05.583 "cntlid": 145, 00:18:05.583 "qid": 0, 00:18:05.583 "state": "enabled", 00:18:05.583 "thread": "nvmf_tgt_poll_group_000", 00:18:05.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:05.583 "listen_address": { 00:18:05.583 "trtype": "TCP", 00:18:05.583 "adrfam": "IPv4", 00:18:05.583 "traddr": "10.0.0.2", 00:18:05.583 "trsvcid": "4420" 00:18:05.583 }, 00:18:05.583 "peer_address": { 00:18:05.583 
"trtype": "TCP", 00:18:05.583 "adrfam": "IPv4", 00:18:05.583 "traddr": "10.0.0.1", 00:18:05.583 "trsvcid": "37866" 00:18:05.583 }, 00:18:05.583 "auth": { 00:18:05.583 "state": "completed", 00:18:05.583 "digest": "sha512", 00:18:05.583 "dhgroup": "ffdhe8192" 00:18:05.583 } 00:18:05.583 } 00:18:05.583 ]' 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.583 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.842 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:18:05.842 19:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MjYwMWU4ZWVjODYxMTU0YzM3NjEzOTFlNDA0NmU0MzM1MmNmZTY3NGVlNGRkNDhjfIb/Qg==: --dhchap-ctrl-secret DHHC-1:03:MzQ3NmRiMmFjMDg1Nzg4ZTdkNTQ0MDAwNGQ5OWUwMzg5NTRiNjE0MGFlNWVlYTdlODEzZjkyODIyNDE2Y2YyMOM8zjU=: 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:06.409 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:06.410 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:06.977 request: 00:18:06.977 { 00:18:06.977 "name": "nvme0", 00:18:06.977 "trtype": "tcp", 00:18:06.977 "traddr": "10.0.0.2", 00:18:06.977 "adrfam": "ipv4", 00:18:06.977 "trsvcid": "4420", 00:18:06.977 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:06.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:06.977 "prchk_reftag": false, 00:18:06.977 "prchk_guard": false, 00:18:06.977 "hdgst": false, 00:18:06.977 "ddgst": false, 00:18:06.977 "dhchap_key": "key2", 00:18:06.977 "allow_unrecognized_csi": false, 00:18:06.977 "method": "bdev_nvme_attach_controller", 00:18:06.977 "req_id": 1 00:18:06.977 } 00:18:06.977 Got JSON-RPC error response 00:18:06.977 response: 00:18:06.977 { 00:18:06.977 "code": -5, 00:18:06.977 "message": "Input/output error" 00:18:06.977 } 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.977 19:25:30 
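# --- Annotation: expected negative result ----------------------------------------
# The request dump and "Input/output error" above are the intended outcome, not a
# test failure: the host entry was re-registered with key1 only, so an attach
# offering key2 must be refused, and bdev_nvme_attach_controller surfaces the
# failed DH-HMAC-CHAP negotiation as JSON-RPC code -5. The NOT helper from
# autotest_common.sh inverts the exit status, so the step passes exactly when the
# RPC fails -- the shape of every negative auth check in this section:
NOT bdev_connect -b nvme0 --dhchap-key key2   # succeeds iff the attach fails
# --------------------------------------------------------------------------------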
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:06.977 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.978 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:06.978 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:07.236 request: 00:18:07.236 { 00:18:07.236 "name": "nvme0", 00:18:07.236 "trtype": "tcp", 00:18:07.236 "traddr": "10.0.0.2", 00:18:07.236 "adrfam": "ipv4", 00:18:07.236 "trsvcid": "4420", 00:18:07.236 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.236 "prchk_reftag": false, 00:18:07.236 "prchk_guard": false, 00:18:07.236 "hdgst": false, 00:18:07.236 "ddgst": false, 00:18:07.236 "dhchap_key": "key1", 00:18:07.236 "dhchap_ctrlr_key": "ckey2", 00:18:07.236 "allow_unrecognized_csi": false, 00:18:07.236 "method": "bdev_nvme_attach_controller", 00:18:07.237 "req_id": 1 00:18:07.237 } 00:18:07.237 Got JSON-RPC error response 00:18:07.237 response: 00:18:07.237 { 00:18:07.237 "code": -5, 00:18:07.237 "message": "Input/output error" 00:18:07.237 } 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.496 19:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.496 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.755 request: 00:18:07.755 { 00:18:07.755 "name": "nvme0", 00:18:07.755 "trtype": "tcp", 00:18:07.755 "traddr": "10.0.0.2", 00:18:07.755 "adrfam": "ipv4", 00:18:07.755 "trsvcid": "4420", 00:18:07.755 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:07.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:07.755 "prchk_reftag": false, 00:18:07.755 "prchk_guard": false, 00:18:07.755 "hdgst": false, 00:18:07.755 "ddgst": false, 00:18:07.755 "dhchap_key": "key1", 00:18:07.755 "dhchap_ctrlr_key": "ckey1", 00:18:07.755 "allow_unrecognized_csi": false, 00:18:07.755 "method": "bdev_nvme_attach_controller", 00:18:07.755 "req_id": 1 00:18:07.755 } 00:18:07.755 Got JSON-RPC error response 00:18:07.755 response: 00:18:07.755 { 00:18:07.755 "code": -5, 00:18:07.755 "message": "Input/output error" 00:18:07.755 } 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2081468 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2081468 ']' 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2081468 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:07.755 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081468 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081468' 00:18:08.015 killing process with pid 2081468 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2081468 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2081468 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=2103348 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 2103348 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2103348 ']' 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.015 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2103348 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2103348 ']' 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
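# --- Annotation: target relaunch for the keyring phase ---------------------------
# The first target (pid 2081468) has been killed and a fresh one started for the
# second half of the suite, with RPC gated until explicit init (--wait-for-rpc)
# and the nvmf_auth log flag enabled so the DH-HMAC-CHAP state machine shows up
# in the log. The launch, reduced to its essentials from the trace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!                 # 2103348 in this run
waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs
# --------------------------------------------------------------------------------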
00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:08.274 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.533 null0 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ikM 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.tRl ]] 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRl 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vSD 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Ora ]] 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ora 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.533 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:08.793 19:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xAC 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Kta ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kta 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.grL 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.793 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
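# --- Annotation: populating the target keyring -----------------------------------
# Because the new target came up with --wait-for-rpc, the keys are loaded into its
# keyring before the key3 attach now in progress. ckeyN entries are added only for
# slots that actually have a controller key (the same ${...:+...} guard as before);
# the files registered in this run, per the trace:
rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.ikM
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tRl
rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.vSD
rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ora
rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha384.xAC
rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kta
rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha512.grL   # no ckey3 in this run
# --------------------------------------------------------------------------------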
00:18:08.794 19:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.361 nvme0n1 00:18:09.361 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.361 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.361 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.620 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.620 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.620 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.620 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.620 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.620 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.620 { 00:18:09.620 "cntlid": 1, 00:18:09.620 "qid": 0, 00:18:09.620 "state": "enabled", 00:18:09.620 "thread": "nvmf_tgt_poll_group_000", 00:18:09.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.620 "listen_address": { 00:18:09.620 "trtype": "TCP", 00:18:09.620 "adrfam": "IPv4", 00:18:09.620 "traddr": "10.0.0.2", 00:18:09.620 "trsvcid": "4420" 00:18:09.621 }, 00:18:09.621 "peer_address": { 00:18:09.621 "trtype": "TCP", 00:18:09.621 "adrfam": "IPv4", 00:18:09.621 "traddr": "10.0.0.1", 00:18:09.621 "trsvcid": "37910" 00:18:09.621 }, 00:18:09.621 "auth": { 00:18:09.621 "state": "completed", 00:18:09.621 "digest": "sha512", 00:18:09.621 "dhgroup": "ffdhe8192" 00:18:09.621 } 00:18:09.621 } 00:18:09.621 ]' 00:18:09.621 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.621 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.621 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.621 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.621 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.880 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.880 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.880 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.880 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:18:09.880 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:18:10.448 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.448 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:10.448 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.448 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.448 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.448 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:10.449 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.449 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.449 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.449 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:10.449 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.707 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.966 request: 00:18:10.966 { 00:18:10.966 "name": "nvme0", 00:18:10.966 "trtype": "tcp", 00:18:10.966 "traddr": "10.0.0.2", 00:18:10.966 "adrfam": "ipv4", 00:18:10.966 "trsvcid": "4420", 00:18:10.966 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:10.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.966 "prchk_reftag": false, 00:18:10.966 "prchk_guard": false, 00:18:10.966 "hdgst": false, 00:18:10.966 "ddgst": false, 00:18:10.966 "dhchap_key": "key3", 00:18:10.966 "allow_unrecognized_csi": false, 00:18:10.966 "method": "bdev_nvme_attach_controller", 00:18:10.966 "req_id": 1 00:18:10.966 } 00:18:10.966 Got JSON-RPC error response 00:18:10.966 response: 00:18:10.966 { 00:18:10.966 "code": -5, 00:18:10.966 "message": "Input/output error" 00:18:10.966 } 00:18:10.966 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:10.966 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.967 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.967 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.967 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:10.967 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:10.967 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:10.967 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:11.225 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:11.225 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:11.225 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.226 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:11.226 request: 00:18:11.226 { 00:18:11.226 "name": "nvme0", 00:18:11.226 "trtype": "tcp", 00:18:11.226 "traddr": "10.0.0.2", 00:18:11.226 "adrfam": "ipv4", 00:18:11.226 "trsvcid": "4420", 00:18:11.226 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:11.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:11.226 "prchk_reftag": false, 00:18:11.226 "prchk_guard": false, 00:18:11.226 "hdgst": false, 00:18:11.226 "ddgst": false, 00:18:11.226 "dhchap_key": "key3", 00:18:11.226 "allow_unrecognized_csi": false, 00:18:11.226 "method": "bdev_nvme_attach_controller", 00:18:11.226 "req_id": 1 00:18:11.226 } 00:18:11.226 Got JSON-RPC error response 00:18:11.226 response: 00:18:11.226 { 00:18:11.226 "code": -5, 00:18:11.226 "message": "Input/output error" 00:18:11.226 } 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:11.485 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:12.053 request: 00:18:12.053 { 00:18:12.053 "name": "nvme0", 00:18:12.053 "trtype": "tcp", 00:18:12.053 "traddr": "10.0.0.2", 00:18:12.053 "adrfam": "ipv4", 00:18:12.053 "trsvcid": "4420", 00:18:12.053 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:12.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.053 "prchk_reftag": false, 00:18:12.053 "prchk_guard": false, 00:18:12.053 "hdgst": false, 00:18:12.053 "ddgst": false, 00:18:12.053 "dhchap_key": "key0", 00:18:12.053 "dhchap_ctrlr_key": "key1", 00:18:12.053 "allow_unrecognized_csi": false, 00:18:12.053 "method": "bdev_nvme_attach_controller", 00:18:12.053 "req_id": 1 00:18:12.053 } 00:18:12.053 Got JSON-RPC error response 00:18:12.053 response: 00:18:12.053 { 00:18:12.053 "code": -5, 00:18:12.053 "message": "Input/output error" 00:18:12.053 } 00:18:12.053 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:12.053 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:12.053 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:12.053 19:25:35 
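The remove_host/add_host pair above re-registers the host entry on the target with no DH-CHAP keys attached, after which the authenticated attach with key0/key1 is expected to fail (and does, with the same -5 as before). The target-side half as a sketch; note these rpc_cmd calls go to the target's default RPC socket, not /var/tmp/host.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  "$rpc" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
  "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"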
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:12.053 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:12.054 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:12.054 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:12.054 nvme0n1 00:18:12.313 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:12.313 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.313 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:12.313 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.313 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.313 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:12.572 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:13.506 nvme0n1 00:18:13.506 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:13.506 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:13.506 19:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
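This is the basic rotation pattern the rest of the section repeats: detach the host-side controller, install the next key for this host NQN on the target with nvmf_subsystem_set_keys, then re-attach presenting the matching key. Condensed from the trace above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key1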
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.506 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:13.766 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.766 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:18:13.766 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: --dhchap-ctrl-secret DHHC-1:03:MWJhNzdmNTdkOWNiMGJlYzE2M2I1YjEyNGFhMjdhYTZlNjMzZDVmYzBkMTQxYWI0YjM0YWE3NmNjNjNkZTYwNhZIvmM=: 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.334 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
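Here the test hands off to the kernel initiator: nvme_connect passes the DHHC-1 secrets directly on the nvme-cli command line, and nvme_get_ctrlr then walks /sys/devices/virtual/nvme-fabrics/ctl/nvme* to find which controller landed on the subsystem NQN. The connect command condensed (secrets shortened here; the full DHHC-1 strings are printed in the trace above):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:03:...'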
--dhchap-key key1 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:14.593 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:14.851 request: 00:18:14.851 { 00:18:14.851 "name": "nvme0", 00:18:14.851 "trtype": "tcp", 00:18:14.851 "traddr": "10.0.0.2", 00:18:14.851 "adrfam": "ipv4", 00:18:14.851 "trsvcid": "4420", 00:18:14.851 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:14.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:14.851 "prchk_reftag": false, 00:18:14.851 "prchk_guard": false, 00:18:14.851 "hdgst": false, 00:18:14.851 "ddgst": false, 00:18:14.851 "dhchap_key": "key1", 00:18:14.851 "allow_unrecognized_csi": false, 00:18:14.851 "method": "bdev_nvme_attach_controller", 00:18:14.851 "req_id": 1 00:18:14.851 } 00:18:14.851 Got JSON-RPC error response 00:18:14.851 response: 00:18:14.851 { 00:18:14.851 "code": -5, 00:18:14.851 "message": "Input/output error" 00:18:14.851 } 00:18:14.851 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:14.851 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:14.851 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:14.851 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:14.852 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:14.852 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:14.852 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:15.787 nvme0n1 00:18:15.787 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:15.787 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:15.788 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.788 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.788 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.788 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:16.047 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:16.306 nvme0n1 00:18:16.306 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:16.306 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:16.306 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.564 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.564 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.565 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
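In the middle of the block above, nvmf_subsystem_set_keys is called with no key arguments, and the plain bdev_connect that follows (no --dhchap-key at all) succeeds, so this form evidently clears the DH-CHAP requirement for the host entry. Both halves as a sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
  # Target: drop the DH-CHAP keys for this host entry.
  "$rpc" nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn"
  # Host: an unauthenticated attach is now accepted.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0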
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: '' 2s 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: ]] 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDlmMWJmMWNkMTU4YzI1YzA0NmJjZGY2YjVkMzQ3ODGEvqfn: 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:16.824 19:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:18.725 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: 2s 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: ]] 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTgyMTU3ZmY4M2Y3ZWM2NGJjNzhlZDFhYTBlZWY0NmYxZWViZDMwNDYxMTUxMzE2xUWWAA==: 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:18.726 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
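nvme_set_keys above rekeys the live kernel controller in place: it echoes the new DHHC-1 secret under /sys/devices/virtual/nvme-fabrics/ctl/nvme0, sleeps 2s, and waitforblk then confirms the namespace is still visible. The trace shows the device path and the echoed key but not the redirection target, so the following sketch assumes the kernel's dhchap_secret / dhchap_ctrl_secret controller attributes:

  dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
  # Assumed sysfs attribute names (Linux in-band authentication interface):
  echo 'DHHC-1:01:...' > "$dev/dhchap_secret"        # new host key
  echo 'DHHC-1:02:...' > "$dev/dhchap_ctrl_secret"   # new controller key
  sleep 2s
  lsblk -l -o NAME | grep -q -w nvme0n1              # namespace still mapped?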
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.260 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:21.519 nvme0n1 00:18:21.777 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.777 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.777 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.777 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.777 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:21.777 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.036 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:22.036 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:22.036 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.295 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.295 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:22.295 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.295 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.295 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.295 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:22.295 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:22.554 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:22.554 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:22.554 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:22.813 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:23.072 request: 00:18:23.072 { 00:18:23.072 "name": "nvme0", 00:18:23.072 "dhchap_key": "key1", 00:18:23.072 "dhchap_ctrlr_key": "key3", 00:18:23.072 "method": "bdev_nvme_set_keys", 00:18:23.072 "req_id": 1 00:18:23.072 } 00:18:23.072 Got JSON-RPC error response 00:18:23.072 response: 00:18:23.072 { 00:18:23.072 "code": -13, 00:18:23.072 "message": "Permission denied" 00:18:23.072 } 00:18:23.072 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:23.072 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:23.072 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:23.072 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:23.331 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:23.331 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:23.331 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.331 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
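Note the two failure modes in this stretch of the log: a mismatched attach fails with JSON-RPC -5 ("Input/output error"), while bdev_nvme_set_keys on a live controller proposing keys the target does not hold is refused with -13 ("Permission denied"); the rekey just before it, with the matching key2/key3, was accepted. The rejected call as a sketch:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Expected to fail with -13: the target holds key2/key3 for this host,
  # so a host proposing key1/key3 is rejected.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key key3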
target/auth.sh@262 -- # (( 1 != 0 )) 00:18:23.331 19:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:24.705 19:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:25.374 nvme0n1 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
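After the rejected rekey the controller is expected to disappear on its own, since it was attached with --ctrlr-loss-timeout-sec 1; the test polls bdev_nvme_get_controllers until the list length reaches zero. The poll loop, condensed from the jq length / sleep 1s pattern in the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Wait for the failed-auth controller to be torn down by its loss timeout.
  while (( $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done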
00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:25.374 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:25.942 request: 00:18:25.942 { 00:18:25.942 "name": "nvme0", 00:18:25.942 "dhchap_key": "key2", 00:18:25.942 "dhchap_ctrlr_key": "key0", 00:18:25.942 "method": "bdev_nvme_set_keys", 00:18:25.942 "req_id": 1 00:18:25.942 } 00:18:25.942 Got JSON-RPC error response 00:18:25.942 response: 00:18:25.942 { 00:18:25.942 "code": -13, 00:18:25.942 "message": "Permission denied" 00:18:25.942 } 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:25.942 19:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2081488 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2081488 ']' 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2081488 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:27.318 
19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2081488 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2081488' 00:18:27.318 killing process with pid 2081488 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2081488 00:18:27.318 19:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2081488 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:27.577 rmmod nvme_tcp 00:18:27.577 rmmod nvme_fabrics 00:18:27.577 rmmod nvme_keyring 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 2103348 ']' 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 2103348 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2103348 ']' 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2103348 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:27.577 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2103348 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2103348' 00:18:27.837 killing process with pid 2103348 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2103348 00:18:27.837 19:25:51 
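Teardown follows: killprocess verifies each pid's process name before killing it (reactor_1 for the host app, reactor_0 for the target, as traced), and nvmftestfini's nvmfcleanup unloads the initiator modules; the rmmod lines show nvme_tcp, nvme_fabrics, and nvme_keyring going away. The module-unload half:

  sync
  # Removing nvme-tcp also drops its now-unused dependencies
  # (nvme_fabrics, nvme_keyring), per the rmmod lines above.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics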
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2103348 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.837 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ikM /tmp/spdk.key-sha256.vSD /tmp/spdk.key-sha384.xAC /tmp/spdk.key-sha512.grL /tmp/spdk.key-sha512.tRl /tmp/spdk.key-sha384.Ora /tmp/spdk.key-sha256.Kta '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:30.374 00:18:30.374 real 2m31.131s 00:18:30.374 user 5m47.887s 00:18:30.374 sys 0m24.362s 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.374 ************************************ 00:18:30.374 END TEST nvmf_auth_target 00:18:30.374 ************************************ 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:30.374 ************************************ 00:18:30.374 START TEST nvmf_bdevio_no_huge 00:18:30.374 ************************************ 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:30.374 * Looking for test storage... 
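With the auth test done (its real/user/sys times are summarized above), the harness moves straight on to the next sub-test; the launch line is visible in the trace. Condensed for reference (run_test is the harness helper that brackets a script with the START/END TEST markers seen here):

  run_test nvmf_bdevio_no_huge \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
      --transport=tcp --no-hugepages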
00:18:30.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:30.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.374 --rc genhtml_branch_coverage=1 00:18:30.374 --rc genhtml_function_coverage=1 00:18:30.374 --rc genhtml_legend=1 00:18:30.374 --rc geninfo_all_blocks=1 00:18:30.374 --rc geninfo_unexecuted_blocks=1 00:18:30.374 00:18:30.374 ' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:30.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.374 --rc genhtml_branch_coverage=1 00:18:30.374 --rc genhtml_function_coverage=1 00:18:30.374 --rc genhtml_legend=1 00:18:30.374 --rc geninfo_all_blocks=1 00:18:30.374 --rc geninfo_unexecuted_blocks=1 00:18:30.374 00:18:30.374 ' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:30.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.374 --rc genhtml_branch_coverage=1 00:18:30.374 --rc genhtml_function_coverage=1 00:18:30.374 --rc genhtml_legend=1 00:18:30.374 --rc geninfo_all_blocks=1 00:18:30.374 --rc geninfo_unexecuted_blocks=1 00:18:30.374 00:18:30.374 ' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:30.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:30.374 --rc genhtml_branch_coverage=1 00:18:30.374 --rc genhtml_function_coverage=1 00:18:30.374 --rc genhtml_legend=1 00:18:30.374 --rc geninfo_all_blocks=1 00:18:30.374 --rc geninfo_unexecuted_blocks=1 00:18:30.374 00:18:30.374 ' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
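The xtrace above is scripts/common.sh comparing the installed lcov (1.15) against 2: each version string is split on dots and dashes and compared field by field, and since 1 < 2 the check returns 0 and the pre-2.x LCOV_OPTS block that follows is exported. A condensed sketch of that comparison:

  # Field-wise version compare, as cmp_versions does in the trace above.
  IFS=.- read -ra ver1 <<< "1.15"
  IFS=.- read -ra ver2 <<< "2"
  if (( ${ver1[0]:-0} < ${ver2[0]:-0} )); then
      echo "lcov < 2.x: use the legacy branch/function coverage flags"
  fi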
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.374 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:30.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:30.375 19:25:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:36.952 
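The `[: : integer expression expected` message near the top of this block is a real shell bug in test/nvmf/common.sh, not transient test noise: line 33 evaluates `'[' '' -eq 1 ']'`, a numeric -eq test whose left operand is a flag variable that is unset in this configuration and expands to the empty string. `[` requires integer operands for -eq, so the test errors out and its body is silently skipped, which is why the run proceeds anyway. A minimal sketch of the failure and the usual guards; SPDK_TEST_SOMETHING and enable_feature are placeholder names, not the actual identifiers at common.sh line 33:

    # Reproduces the logged message when the flag is unset:
    #   [: : integer expression expected
    [ "$SPDK_TEST_SOMETHING" -eq 1 ] && enable_feature

    # Guarded variants: default the expansion to 0, or compare as a string.
    [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ] && enable_feature
    [[ "$SPDK_TEST_SOMETHING" == 1 ]] && enable_feature

The same message recurs each time common.sh is re-sourced; it shows up again below in the nvmf_tls prologue.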
19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.952 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:36.953 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:36.953 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:36.953 Found net devices under 0000:86:00.0: cvl_0_0 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:36.953 Found net devices under 0000:86:00.1: cvl_0_1 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:36.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:18:36.953 00:18:36.953 --- 10.0.0.2 ping statistics --- 00:18:36.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.953 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:18:36.953 00:18:36.953 --- 10.0.0.1 ping statistics --- 00:18:36.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.953 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=2110154 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 2110154 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2110154 ']' 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:18:36.953 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.954 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:36.954 19:25:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.954 [2024-10-17 19:25:59.885854] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:18:36.954 [2024-10-17 19:25:59.885900] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:36.954 [2024-10-17 19:25:59.969214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:36.954 [2024-10-17 19:26:00.016842] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.954 [2024-10-17 19:26:00.016877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.954 [2024-10-17 19:26:00.016885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.954 [2024-10-17 19:26:00.016892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.954 [2024-10-17 19:26:00.016897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
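For reference, the target under test was launched inside the namespace as `ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78`: shared-memory id 0, all tracepoint groups enabled (0xFFFF), hugepage-free mode capped at 1024 MiB, and core mask 0x78, which is cores 3 through 6 and matches the four reactor threads reported just below. The notices above spell out the two ways to get at the trace data; a sketch, assuming spdk_trace was built alongside nvmf_tgt in this tree and supports the usual -f option for reading a saved trace file:

    # Live snapshot of the nvmf tracepoints while the target runs
    # (-i 0 matches the shm id the target was started with):
    ./build/bin/spdk_trace -s nvmf -i 0

    # Or keep the raw trace file for offline decoding after the target exits
    # (file name taken from the notice above):
    cp /dev/shm/nvmf_trace.0 /tmp/
    ./build/bin/spdk_trace -f /tmp/nvmf_trace.0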
00:18:36.954 [2024-10-17 19:26:00.017982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:36.954 [2024-10-17 19:26:00.018092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:36.954 [2024-10-17 19:26:00.018224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:36.954 [2024-10-17 19:26:00.018225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:36.954 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:36.954 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:36.954 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:36.954 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:36.954 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 [2024-10-17 19:26:00.772963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 Malloc0 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:37.213 [2024-10-17 19:26:00.817251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:37.213 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:18:37.214 { 00:18:37.214 "params": { 00:18:37.214 "name": "Nvme$subsystem", 00:18:37.214 "trtype": "$TEST_TRANSPORT", 00:18:37.214 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.214 "adrfam": "ipv4", 00:18:37.214 "trsvcid": "$NVMF_PORT", 00:18:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.214 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.214 "hdgst": ${hdgst:-false}, 00:18:37.214 "ddgst": ${ddgst:-false} 00:18:37.214 }, 00:18:37.214 "method": "bdev_nvme_attach_controller" 00:18:37.214 } 00:18:37.214 EOF 00:18:37.214 )") 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=, 00:18:37.214 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:18:37.214 "params": { 00:18:37.214 "name": "Nvme1", 00:18:37.214 "trtype": "tcp", 00:18:37.214 "traddr": "10.0.0.2", 00:18:37.214 "adrfam": "ipv4", 00:18:37.214 "trsvcid": "4420", 00:18:37.214 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.214 "hdgst": false, 00:18:37.214 "ddgst": false 00:18:37.214 }, 00:18:37.214 "method": "bdev_nvme_attach_controller" 00:18:37.214 }' 00:18:37.214 [2024-10-17 19:26:00.867304] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
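bdevio is handed its bdev configuration on /dev/fd/62, the tell-tale of bash process substitution: target/bdevio.sh line 24 evidently runs the tool with `--json <(gen_nvmf_target_json)`, and gen_nvmf_target_json fills the heredoc template above once per requested subsystem (defaulting to subsystem 1, hence the Nvme1 controller attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420) before `jq .` pretty-prints the result. A sketch of the same pattern, assuming the filled-in bdev_nvme_attach_controller entry printed above is wrapped in a standard bdev-subsystem config block:

    # <(...) expands to a /dev/fd path (here /dev/fd/62), so bdevio can read
    # the generated JSON as though it were a file on disk:
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024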
00:18:37.214 [2024-10-17 19:26:00.867350] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2110402 ] 00:18:37.214 [2024-10-17 19:26:00.948826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:37.214 [2024-10-17 19:26:00.996647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.214 [2024-10-17 19:26:00.996755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.214 [2024-10-17 19:26:00.996755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.473 I/O targets: 00:18:37.473 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:37.473 00:18:37.473 00:18:37.473 CUnit - A unit testing framework for C - Version 2.1-3 00:18:37.473 http://cunit.sourceforge.net/ 00:18:37.473 00:18:37.473 00:18:37.473 Suite: bdevio tests on: Nvme1n1 00:18:37.473 Test: blockdev write read block ...passed 00:18:37.732 Test: blockdev write zeroes read block ...passed 00:18:37.732 Test: blockdev write zeroes read no split ...passed 00:18:37.732 Test: blockdev write zeroes read split ...passed 00:18:37.732 Test: blockdev write zeroes read split partial ...passed 00:18:37.732 Test: blockdev reset ...[2024-10-17 19:26:01.319926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:37.732 [2024-10-17 19:26:01.319989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d69a20 (9): Bad file descriptor 00:18:37.732 [2024-10-17 19:26:01.375151] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:37.732 passed 00:18:37.732 Test: blockdev write read 8 blocks ...passed 00:18:37.732 Test: blockdev write read size > 128k ...passed 00:18:37.732 Test: blockdev write read invalid size ...passed 00:18:37.732 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:37.732 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:37.732 Test: blockdev write read max offset ...passed 00:18:37.732 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:37.990 Test: blockdev writev readv 8 blocks ...passed 00:18:37.990 Test: blockdev writev readv 30 x 1block ...passed 00:18:37.990 Test: blockdev writev readv block ...passed 00:18:37.990 Test: blockdev writev readv size > 128k ...passed 00:18:37.990 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:37.990 Test: blockdev comparev and writev ...[2024-10-17 19:26:01.625350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.990 [2024-10-17 19:26:01.625382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.625396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.625408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.625656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.625667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.625678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.625686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.625920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.625930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.625941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.625949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.626181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.626192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.626204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:37.991 [2024-10-17 19:26:01.626211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:37.991 passed 00:18:37.991 Test: blockdev nvme passthru rw ...passed 00:18:37.991 Test: blockdev nvme passthru vendor specific ...[2024-10-17 19:26:01.708918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.991 [2024-10-17 19:26:01.708935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.709035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.991 [2024-10-17 19:26:01.709046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.709147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.991 [2024-10-17 19:26:01.709157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:37.991 [2024-10-17 19:26:01.709261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:37.991 [2024-10-17 19:26:01.709271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:37.991 passed 00:18:37.991 Test: blockdev nvme admin passthru ...passed 00:18:37.991 Test: blockdev copy ...passed 00:18:37.991 00:18:37.991 Run Summary: Type Total Ran Passed Failed Inactive 00:18:37.991 suites 1 1 n/a 0 0 00:18:37.991 tests 23 23 23 0 0 00:18:37.991 asserts 152 152 152 0 n/a 00:18:37.991 00:18:37.991 Elapsed time = 1.145 seconds 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:38.250 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:38.510 rmmod nvme_tcp 00:18:38.510 rmmod nvme_fabrics 00:18:38.510 rmmod nvme_keyring 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 2110154 ']' 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # killprocess 2110154 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2110154 ']' 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2110154 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2110154 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2110154' 00:18:38.510 killing process with pid 2110154 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2110154 00:18:38.510 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2110154 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.770 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.303 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:41.303 00:18:41.303 real 0m10.820s 00:18:41.303 user 0m13.384s 00:18:41.303 sys 0m5.361s 00:18:41.303 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:18:41.304 ************************************ 00:18:41.304 END TEST nvmf_bdevio_no_huge 00:18:41.304 ************************************ 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.304 ************************************ 00:18:41.304 START TEST nvmf_tls 00:18:41.304 ************************************ 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:41.304 * Looking for test storage... 00:18:41.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:41.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.304 --rc genhtml_branch_coverage=1 00:18:41.304 --rc genhtml_function_coverage=1 00:18:41.304 --rc genhtml_legend=1 00:18:41.304 --rc geninfo_all_blocks=1 00:18:41.304 --rc geninfo_unexecuted_blocks=1 00:18:41.304 00:18:41.304 ' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:41.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.304 --rc genhtml_branch_coverage=1 00:18:41.304 --rc genhtml_function_coverage=1 00:18:41.304 --rc genhtml_legend=1 00:18:41.304 --rc geninfo_all_blocks=1 00:18:41.304 --rc geninfo_unexecuted_blocks=1 00:18:41.304 00:18:41.304 ' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:41.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.304 --rc genhtml_branch_coverage=1 00:18:41.304 --rc genhtml_function_coverage=1 00:18:41.304 --rc genhtml_legend=1 00:18:41.304 --rc geninfo_all_blocks=1 00:18:41.304 --rc geninfo_unexecuted_blocks=1 00:18:41.304 00:18:41.304 ' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:41.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:41.304 --rc genhtml_branch_coverage=1 00:18:41.304 --rc genhtml_function_coverage=1 00:18:41.304 --rc genhtml_legend=1 00:18:41.304 --rc geninfo_all_blocks=1 00:18:41.304 --rc geninfo_unexecuted_blocks=1 00:18:41.304 00:18:41.304 ' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
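The `lt 1.15 2` probe above is how the tls test decides which lcov flags to export: scripts/common.sh splits both version strings on `.`, `-` and `:` (the IFS=.-: reads visible above), then compares the fields left to right until one side wins. A minimal standalone sketch of that logic; version_lt is a stand-in name for the script's `lt` helper:

    # Succeeds when $1 is strictly older than $2; missing fields count as 0.
    version_lt() {
        local IFS=.-: v=0
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        while (( v < ${#ver1[@]} || v < ${#ver2[@]} )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( v++ ))
        done
        return 1  # versions are equal
    }

    # Same shape as the check in the log (awk '{print $NF}' pulls "1.15"):
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov"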
00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:41.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:41.304 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:18:41.305 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
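From here the tls prologue repeats the NIC discovery already done for bdevio above: gather_supported_nvmf_pci_devs buckets PCI functions by vendor:device pairs (Intel 0x1592/0x159b into e810, 0x37d2 into x722, the listed Mellanox IDs into mlx), keeps only the e810 bucket (the `[[ e810 == e810 ]]` branch), and resolves each surviving function to its kernel netdev through the /sys/bus/pci/devices/$pci/net/ glob. A sketch of that last lookup for the ice-driven 0x159b parts on this host, assuming lspci is available:

    # Map each Intel E810 function (8086:159b) to its kernel net interface.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$netdev" ] && echo "$pci -> ${netdev##*/}"
        done
    done

On this machine that yields 0000:86:00.0 -> cvl_0_0 and 0000:86:00.1 -> cvl_0_1, the two ports the test then splits across network namespaces.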
00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:47.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:47.875 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:47.875 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:47.876 Found net devices under 0000:86:00.0: cvl_0_0 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:47.876 Found net devices under 0000:86:00.1: cvl_0_1 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:47.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:18:47.876 00:18:47.876 --- 10.0.0.2 ping statistics --- 00:18:47.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.876 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
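Condensed, the nvmf_tcp_init sequence above builds a two-endpoint topology out of the two physical functions: the target port (cvl_0_0) is moved into a private network namespace and the initiator port (cvl_0_1) stays in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 actually crosses the link:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1   # drop stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                   # sanity check both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
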
00:18:47.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:18:47.876 00:18:47.876 --- 10.0.0.1 ping statistics --- 00:18:47.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.876 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2114162 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2114162 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2114162 ']' 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.876 19:26:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.876 [2024-10-17 19:26:10.806229] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
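nvmfappstart launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that polling loop follows; the real helper in autotest_common.sh additionally bounds the retries (max_retries=100 above), so treat this as an approximation:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        while kill -0 "$pid" 2>/dev/null; do
            # rpc_get_methods succeeds once the app is up and listening on the socket
            if rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1   # process died before the socket came up
    }
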
00:18:47.876 [2024-10-17 19:26:10.806279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.876 [2024-10-17 19:26:10.888677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.876 [2024-10-17 19:26:10.928322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.876 [2024-10-17 19:26:10.928357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.876 [2024-10-17 19:26:10.928364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.876 [2024-10-17 19:26:10.928370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.876 [2024-10-17 19:26:10.928376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.876 [2024-10-17 19:26:10.928927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.876 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:47.876 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:47.876 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:47.876 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:47.876 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.135 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.135 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:48.135 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:48.135 true 00:18:48.135 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.135 19:26:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:48.394 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:48.394 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:48.394 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.653 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.653 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:48.912 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:48.912 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:48.912 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:48.912 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.912 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:49.171 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:49.172 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:49.172 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:49.172 19:26:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.430 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:49.430 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:49.431 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:49.431 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.431 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:49.689 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:49.689 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:49.689 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:49.949 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.949 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:50.207 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # local prefix key digest 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.xPMr8qNdlV 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.yvslCCdi6P 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xPMr8qNdlV 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.yvslCCdi6P 00:18:50.208 19:26:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:50.466 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:50.726 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.xPMr8qNdlV 00:18:50.726 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xPMr8qNdlV 00:18:50.726 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.726 [2024-10-17 19:26:14.460902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.726 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.985 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.244 [2024-10-17 19:26:14.829850] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.244 [2024-10-17 19:26:14.830084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.244 19:26:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.244 malloc0 00:18:51.503 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.503 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xPMr8qNdlV 00:18:51.761 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.020 19:26:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xPMr8qNdlV 00:19:01.998 Initializing NVMe Controllers 00:19:01.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:01.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:01.998 Initialization complete. Launching workers. 00:19:01.998 ======================================================== 00:19:01.998 Latency(us) 00:19:01.998 Device Information : IOPS MiB/s Average min max 00:19:01.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16849.02 65.82 3798.51 786.36 6357.68 00:19:01.998 ======================================================== 00:19:01.998 Total : 16849.02 65.82 3798.51 786.36 6357.68 00:19:01.998 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xPMr8qNdlV 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xPMr8qNdlV 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2116718 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2116718 /var/tmp/bdevperf.sock 00:19:01.998 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.999 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2116718 ']' 00:19:01.999 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.999 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:01.999 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
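setup_nvmf_tgt, expanded in the trace above, amounts to this RPC sequence: a TCP transport, a TLS-enabled listener (-k), a malloc-backed namespace, and a host entry bound to the file-based key:

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.xPMr8qNdlV
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
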
00:19:01.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.999 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:01.999 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.999 [2024-10-17 19:26:25.748289] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:01.999 [2024-10-17 19:26:25.748337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116718 ] 00:19:02.257 [2024-10-17 19:26:25.822697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.257 [2024-10-17 19:26:25.863762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.258 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.258 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:02.258 19:26:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xPMr8qNdlV 00:19:02.516 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.516 [2024-10-17 19:26:26.289308] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.775 TLSTESTn1 00:19:02.775 19:26:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.775 Running I/O for 10 seconds... 
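On the initiator side the flow mirrors it: bdevperf is started with its own RPC socket, the same key file is registered there, and the controller is attached with --psk before the verify workload runs:

    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xPMr8qNdlV
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
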
00:19:04.800 5220.00 IOPS, 20.39 MiB/s [2024-10-17T17:26:29.527Z] 5123.00 IOPS, 20.01 MiB/s [2024-10-17T17:26:30.904Z] 5022.00 IOPS, 19.62 MiB/s [2024-10-17T17:26:31.840Z] 5018.00 IOPS, 19.60 MiB/s [2024-10-17T17:26:32.777Z] 5004.20 IOPS, 19.55 MiB/s [2024-10-17T17:26:33.715Z] 4986.00 IOPS, 19.48 MiB/s [2024-10-17T17:26:34.654Z] 4994.29 IOPS, 19.51 MiB/s [2024-10-17T17:26:35.591Z] 4981.25 IOPS, 19.46 MiB/s [2024-10-17T17:26:36.529Z] 4960.78 IOPS, 19.38 MiB/s [2024-10-17T17:26:36.529Z] 4938.20 IOPS, 19.29 MiB/s 00:19:12.745 Latency(us) 00:19:12.745 [2024-10-17T17:26:36.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.745 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:12.745 Verification LBA range: start 0x0 length 0x2000 00:19:12.745 TLSTESTn1 : 10.02 4942.71 19.31 0.00 0.00 25860.92 5554.96 31207.62 00:19:12.745 [2024-10-17T17:26:36.529Z] =================================================================================================================== 00:19:12.745 [2024-10-17T17:26:36.529Z] Total : 4942.71 19.31 0.00 0.00 25860.92 5554.96 31207.62 00:19:12.745 { 00:19:12.745 "results": [ 00:19:12.745 { 00:19:12.745 "job": "TLSTESTn1", 00:19:12.745 "core_mask": "0x4", 00:19:12.745 "workload": "verify", 00:19:12.745 "status": "finished", 00:19:12.745 "verify_range": { 00:19:12.745 "start": 0, 00:19:12.745 "length": 8192 00:19:12.745 }, 00:19:12.745 "queue_depth": 128, 00:19:12.745 "io_size": 4096, 00:19:12.745 "runtime": 10.016776, 00:19:12.745 "iops": 4942.708112869849, 00:19:12.745 "mibps": 19.30745356589785, 00:19:12.745 "io_failed": 0, 00:19:12.745 "io_timeout": 0, 00:19:12.745 "avg_latency_us": 25860.92155451039, 00:19:12.745 "min_latency_us": 5554.95619047619, 00:19:12.745 "max_latency_us": 31207.619047619046 00:19:12.745 } 00:19:12.745 ], 00:19:12.745 "core_count": 1 00:19:12.745 } 00:19:12.746 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.746 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2116718 00:19:12.746 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2116718 ']' 00:19:12.746 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2116718 00:19:12.746 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2116718 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2116718' 00:19:13.005 killing process with pid 2116718 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2116718 00:19:13.005 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.005 00:19:13.005 Latency(us) 00:19:13.005 [2024-10-17T17:26:36.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.005 [2024-10-17T17:26:36.789Z] 
=================================================================================================================== 00:19:13.005 [2024-10-17T17:26:36.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2116718 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yvslCCdi6P 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yvslCCdi6P 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yvslCCdi6P 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yvslCCdi6P 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2118360 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2118360 /var/tmp/bdevperf.sock 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2118360 ']' 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
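The negative test above deliberately attaches with the second interchange key (/tmp/tmp.yvslCCdi6P), so target and initiator hold different PSKs and the handshake fails. Both keys follow the NVMeTLSkey-1 interchange format produced by format_interchange_psk earlier. A hedged sketch of that encoding: it assumes the "01" field is the hash identifier (the script passes digest 1) and that the payload is base64 of the ASCII key with its little-endian CRC32 appended, which is consistent with the key strings printed above:

    format_interchange_psk() {
        local key=$1 b64
        # gzip's RFC 1952 trailer carries the CRC32 (little-endian) of the input,
        # so the first 4 of the last 8 bytes are the checksum we need
        b64=$( { printf '%s' "$key"
                 printf '%s' "$key" | gzip -c | tail -c 8 | head -c 4; } | base64 )
        printf 'NVMeTLSkey-1:01:%s:\n' "$b64"
    }
    format_interchange_psk 00112233445566778899aabbccddeeff
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
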
00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.005 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.005 [2024-10-17 19:26:36.783968] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:13.005 [2024-10-17 19:26:36.784015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118360 ] 00:19:13.264 [2024-10-17 19:26:36.859314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.264 [2024-10-17 19:26:36.900445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.264 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.264 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:13.264 19:26:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yvslCCdi6P 00:19:13.523 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:13.781 [2024-10-17 19:26:37.337876] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.782 [2024-10-17 19:26:37.342604] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:13.782 [2024-10-17 19:26:37.343214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d04240 (107): Transport endpoint is not connected 00:19:13.782 [2024-10-17 19:26:37.344206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d04240 (9): Bad file descriptor 00:19:13.782 [2024-10-17 19:26:37.345206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.782 [2024-10-17 19:26:37.345217] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.782 [2024-10-17 19:26:37.345224] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:13.782 [2024-10-17 19:26:37.345234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:13.782 request: 00:19:13.782 { 00:19:13.782 "name": "TLSTEST", 00:19:13.782 "trtype": "tcp", 00:19:13.782 "traddr": "10.0.0.2", 00:19:13.782 "adrfam": "ipv4", 00:19:13.782 "trsvcid": "4420", 00:19:13.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.782 "prchk_reftag": false, 00:19:13.782 "prchk_guard": false, 00:19:13.782 "hdgst": false, 00:19:13.782 "ddgst": false, 00:19:13.782 "psk": "key0", 00:19:13.782 "allow_unrecognized_csi": false, 00:19:13.782 "method": "bdev_nvme_attach_controller", 00:19:13.782 "req_id": 1 00:19:13.782 } 00:19:13.782 Got JSON-RPC error response 00:19:13.782 response: 00:19:13.782 { 00:19:13.782 "code": -5, 00:19:13.782 "message": "Input/output error" 00:19:13.782 } 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2118360 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2118360 ']' 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2118360 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2118360 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118360' 00:19:13.782 killing process with pid 2118360 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2118360 00:19:13.782 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.782 00:19:13.782 Latency(us) 00:19:13.782 [2024-10-17T17:26:37.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.782 [2024-10-17T17:26:37.566Z] =================================================================================================================== 00:19:13.782 [2024-10-17T17:26:37.566Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.782 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2118360 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xPMr8qNdlV 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.xPMr8qNdlV 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xPMr8qNdlV 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xPMr8qNdlV 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2118587 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2118587 /var/tmp/bdevperf.sock 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2118587 ']' 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.041 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.041 [2024-10-17 19:26:37.628750] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
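Each failure case is wrapped in NOT, which inverts the exit status so the test passes only when the attach genuinely fails. A minimal sketch consistent with the es bookkeeping visible in the trace (es=1, (( !es == 0 ))); the real helper in autotest_common.sh is more elaborate and also validates the argument via valid_exec_arg:

    NOT() {
        local es=0
        "$@" || es=$?
        # success here means the wrapped command failed, as the test expects
        (( es != 0 ))
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xPMr8qNdlV
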
00:19:14.041 [2024-10-17 19:26:37.628797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118587 ] 00:19:14.041 [2024-10-17 19:26:37.701877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.041 [2024-10-17 19:26:37.738122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.303 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.304 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.304 19:26:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xPMr8qNdlV 00:19:14.304 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:14.562 [2024-10-17 19:26:38.192148] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.562 [2024-10-17 19:26:38.202887] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:14.562 [2024-10-17 19:26:38.202916] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:14.562 [2024-10-17 19:26:38.202942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.562 [2024-10-17 19:26:38.203632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e240 (107): Transport endpoint is not connected 00:19:14.562 [2024-10-17 19:26:38.204625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56e240 (9): Bad file descriptor 00:19:14.562 [2024-10-17 19:26:38.205626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.562 [2024-10-17 19:26:38.205637] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.562 [2024-10-17 19:26:38.205646] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:14.562 [2024-10-17 19:26:38.205657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:14.562 request: 00:19:14.562 { 00:19:14.562 "name": "TLSTEST", 00:19:14.562 "trtype": "tcp", 00:19:14.562 "traddr": "10.0.0.2", 00:19:14.562 "adrfam": "ipv4", 00:19:14.562 "trsvcid": "4420", 00:19:14.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.562 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:14.562 "prchk_reftag": false, 00:19:14.562 "prchk_guard": false, 00:19:14.562 "hdgst": false, 00:19:14.562 "ddgst": false, 00:19:14.562 "psk": "key0", 00:19:14.562 "allow_unrecognized_csi": false, 00:19:14.562 "method": "bdev_nvme_attach_controller", 00:19:14.562 "req_id": 1 00:19:14.562 } 00:19:14.562 Got JSON-RPC error response 00:19:14.562 response: 00:19:14.562 { 00:19:14.562 "code": -5, 00:19:14.562 "message": "Input/output error" 00:19:14.562 } 00:19:14.562 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2118587 00:19:14.562 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2118587 ']' 00:19:14.562 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2118587 00:19:14.562 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:14.562 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.563 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2118587 00:19:14.563 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:14.563 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:14.563 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118587' 00:19:14.563 killing process with pid 2118587 00:19:14.563 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2118587 00:19:14.563 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.563 00:19:14.563 Latency(us) 00:19:14.563 [2024-10-17T17:26:38.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.563 [2024-10-17T17:26:38.347Z] =================================================================================================================== 00:19:14.563 [2024-10-17T17:26:38.347Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.563 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2118587 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xPMr8qNdlV 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.xPMr8qNdlV 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xPMr8qNdlV 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xPMr8qNdlV 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2118775 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2118775 /var/tmp/bdevperf.sock 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2118775 ']' 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:14.822 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.822 [2024-10-17 19:26:38.485881] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:19:14.822 [2024-10-17 19:26:38.485931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118775 ] 00:19:14.822 [2024-10-17 19:26:38.562114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.822 [2024-10-17 19:26:38.600488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.082 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.082 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.082 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xPMr8qNdlV 00:19:15.341 19:26:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:15.341 [2024-10-17 19:26:39.058076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.341 [2024-10-17 19:26:39.065008] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:15.341 [2024-10-17 19:26:39.065029] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:15.341 [2024-10-17 19:26:39.065052] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.341 [2024-10-17 19:26:39.065294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4240 (107): Transport endpoint is not connected 00:19:15.341 [2024-10-17 19:26:39.066287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd4240 (9): Bad file descriptor 00:19:15.341 [2024-10-17 19:26:39.067290] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:15.341 [2024-10-17 19:26:39.067300] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.341 [2024-10-17 19:26:39.067307] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:15.341 [2024-10-17 19:26:39.067316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:15.341 request: 00:19:15.341 { 00:19:15.341 "name": "TLSTEST", 00:19:15.341 "trtype": "tcp", 00:19:15.341 "traddr": "10.0.0.2", 00:19:15.341 "adrfam": "ipv4", 00:19:15.341 "trsvcid": "4420", 00:19:15.341 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:15.341 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.341 "prchk_reftag": false, 00:19:15.341 "prchk_guard": false, 00:19:15.341 "hdgst": false, 00:19:15.341 "ddgst": false, 00:19:15.341 "psk": "key0", 00:19:15.341 "allow_unrecognized_csi": false, 00:19:15.341 "method": "bdev_nvme_attach_controller", 00:19:15.341 "req_id": 1 00:19:15.341 } 00:19:15.341 Got JSON-RPC error response 00:19:15.341 response: 00:19:15.341 { 00:19:15.341 "code": -5, 00:19:15.341 "message": "Input/output error" 00:19:15.341 } 00:19:15.341 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2118775 00:19:15.341 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2118775 ']' 00:19:15.341 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2118775 00:19:15.341 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:15.341 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.341 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2118775 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118775' 00:19:15.601 killing process with pid 2118775 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2118775 00:19:15.601 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.601 00:19:15.601 Latency(us) 00:19:15.601 [2024-10-17T17:26:39.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.601 [2024-10-17T17:26:39.385Z] =================================================================================================================== 00:19:15.601 [2024-10-17T17:26:39.385Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2118775 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.601 
19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:15.601 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2118841 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2118841 /var/tmp/bdevperf.sock 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2118841 ']' 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.602 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.602 [2024-10-17 19:26:39.345229] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
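The bdevperf instance coming up here (pid 2118841) serves the next negative case, a PSK registered with an empty path. A sketch of the call being exercised (same rpc.py shorthand and socket as above); the keyring rejects non-absolute paths before any TLS machinery is touched:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
# -> -1 (Operation not permitted): "Non-absolute paths are not allowed"
# the follow-up bdev_nvme_attach_controller --psk key0 then fails with
# -126 (Required key not available), since "key0" was never added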
00:19:15.602 [2024-10-17 19:26:39.345277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118841 ] 00:19:15.861 [2024-10-17 19:26:39.413842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.861 [2024-10-17 19:26:39.450615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.861 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:15.861 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:15.861 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:16.121 [2024-10-17 19:26:39.728235] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:16.121 [2024-10-17 19:26:39.728273] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:16.121 request: 00:19:16.121 { 00:19:16.121 "name": "key0", 00:19:16.121 "path": "", 00:19:16.121 "method": "keyring_file_add_key", 00:19:16.121 "req_id": 1 00:19:16.121 } 00:19:16.121 Got JSON-RPC error response 00:19:16.121 response: 00:19:16.121 { 00:19:16.121 "code": -1, 00:19:16.121 "message": "Operation not permitted" 00:19:16.121 } 00:19:16.121 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:16.380 [2024-10-17 19:26:39.932873] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.380 [2024-10-17 19:26:39.932903] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:16.380 request: 00:19:16.380 { 00:19:16.380 "name": "TLSTEST", 00:19:16.380 "trtype": "tcp", 00:19:16.380 "traddr": "10.0.0.2", 00:19:16.380 "adrfam": "ipv4", 00:19:16.380 "trsvcid": "4420", 00:19:16.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:16.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.380 "prchk_reftag": false, 00:19:16.380 "prchk_guard": false, 00:19:16.380 "hdgst": false, 00:19:16.380 "ddgst": false, 00:19:16.380 "psk": "key0", 00:19:16.380 "allow_unrecognized_csi": false, 00:19:16.380 "method": "bdev_nvme_attach_controller", 00:19:16.380 "req_id": 1 00:19:16.380 } 00:19:16.380 Got JSON-RPC error response 00:19:16.380 response: 00:19:16.380 { 00:19:16.380 "code": -126, 00:19:16.380 "message": "Required key not available" 00:19:16.380 } 00:19:16.380 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2118841 00:19:16.380 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2118841 ']' 00:19:16.380 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2118841 00:19:16.380 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.380 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.380 19:26:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 
2118841 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2118841' 00:19:16.380 killing process with pid 2118841 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2118841 00:19:16.380 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.380 00:19:16.380 Latency(us) 00:19:16.380 [2024-10-17T17:26:40.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.380 [2024-10-17T17:26:40.164Z] =================================================================================================================== 00:19:16.380 [2024-10-17T17:26:40.164Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2118841 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2114162 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2114162 ']' 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2114162 00:19:16.380 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2114162 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2114162' 00:19:16.639 killing process with pid 2114162 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2114162 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2114162 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:19:16.639 19:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.639 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:16.898 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.5XHsGmfKEg 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.5XHsGmfKEg 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2119087 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2119087 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2119087 ']' 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.899 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.899 [2024-10-17 19:26:40.482964] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:16.899 [2024-10-17 19:26:40.483008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.899 [2024-10-17 19:26:40.561106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.899 [2024-10-17 19:26:40.601429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.899 [2024-10-17 19:26:40.601468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
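The key_long value assembled just above is the PSK in NVMe-oF interchange form. A rough reconstruction of what the format_interchange_psk helper computes, assuming (consistently with the decoded string in the log) that the payload is the key bytes followed by their little-endian CRC32, base64-encoded, with "02" selecting the 48-byte variant:

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                    # the test uses the ASCII hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte CRC32 appended as an integrity check
print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
EOF
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The key is then written to a mktemp file and chmod 0600, since the keyring refuses anything more permissive, as the later test cases demonstrate.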
00:19:16.899 [2024-10-17 19:26:40.601476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.899 [2024-10-17 19:26:40.601482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.899 [2024-10-17 19:26:40.601487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.899 [2024-10-17 19:26:40.602073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.5XHsGmfKEg 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5XHsGmfKEg 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:17.158 [2024-10-17 19:26:40.893663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.158 19:26:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:17.417 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:17.676 [2024-10-17 19:26:41.286689] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:17.676 [2024-10-17 19:26:41.286904] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:17.676 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:17.936 malloc0 00:19:17.936 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:17.936 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:18.195 19:26:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XHsGmfKEg 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5XHsGmfKEg 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2119338 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2119338 /var/tmp/bdevperf.sock 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2119338 ']' 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:18.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:18.455 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:18.455 [2024-10-17 19:26:42.118718] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
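While the bdevperf instance for the positive test (pid 2119338) comes up, it is worth condensing what setup_nvmf_tgt just did on the target side: seven RPCs, all visible in the log above (rpc.py shorthand as before, talking to the target's default /var/tmp/spdk.sock):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                       # -k marks the listener TLS-capable
rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB ram disk, 4 KiB blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg    # the 0600 key file from above
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With both sides holding the same key, the TLSTESTn1 run below completes its full 10 seconds of verified I/O instead of erroring out.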
00:19:18.455 [2024-10-17 19:26:42.118767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119338 ] 00:19:18.455 [2024-10-17 19:26:42.185345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.455 [2024-10-17 19:26:42.225204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.714 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:18.714 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:18.714 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:18.973 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:18.973 [2024-10-17 19:26:42.690856] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:19.232 TLSTESTn1 00:19:19.232 19:26:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:19.232 Running I/O for 10 seconds... 00:19:21.106 5392.00 IOPS, 21.06 MiB/s [2024-10-17T17:26:46.269Z] 5493.00 IOPS, 21.46 MiB/s [2024-10-17T17:26:47.207Z] 5565.00 IOPS, 21.74 MiB/s [2024-10-17T17:26:48.144Z] 5587.00 IOPS, 21.82 MiB/s [2024-10-17T17:26:49.085Z] 5595.80 IOPS, 21.86 MiB/s [2024-10-17T17:26:50.023Z] 5531.33 IOPS, 21.61 MiB/s [2024-10-17T17:26:50.959Z] 5476.43 IOPS, 21.39 MiB/s [2024-10-17T17:26:51.897Z] 5386.88 IOPS, 21.04 MiB/s [2024-10-17T17:26:53.277Z] 5334.00 IOPS, 20.84 MiB/s [2024-10-17T17:26:53.277Z] 5295.00 IOPS, 20.68 MiB/s 00:19:29.493 Latency(us) 00:19:29.493 [2024-10-17T17:26:53.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.493 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:29.493 Verification LBA range: start 0x0 length 0x2000 00:19:29.493 TLSTESTn1 : 10.02 5298.93 20.70 0.00 0.00 24120.39 6834.47 32455.92 00:19:29.493 [2024-10-17T17:26:53.277Z] =================================================================================================================== 00:19:29.493 [2024-10-17T17:26:53.277Z] Total : 5298.93 20.70 0.00 0.00 24120.39 6834.47 32455.92 00:19:29.493 { 00:19:29.493 "results": [ 00:19:29.493 { 00:19:29.493 "job": "TLSTESTn1", 00:19:29.493 "core_mask": "0x4", 00:19:29.493 "workload": "verify", 00:19:29.493 "status": "finished", 00:19:29.493 "verify_range": { 00:19:29.493 "start": 0, 00:19:29.493 "length": 8192 00:19:29.493 }, 00:19:29.493 "queue_depth": 128, 00:19:29.493 "io_size": 4096, 00:19:29.493 "runtime": 10.016555, 00:19:29.493 "iops": 5298.927625316289, 00:19:29.493 "mibps": 20.698936036391753, 00:19:29.493 "io_failed": 0, 00:19:29.493 "io_timeout": 0, 00:19:29.493 "avg_latency_us": 24120.3859305932, 00:19:29.493 "min_latency_us": 6834.4685714285715, 00:19:29.493 "max_latency_us": 32455.92380952381 00:19:29.493 } 00:19:29.493 ], 00:19:29.493 
"core_count": 1 00:19:29.493 } 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2119338 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2119338 ']' 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2119338 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2119338 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2119338' 00:19:29.493 killing process with pid 2119338 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2119338 00:19:29.493 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.493 00:19:29.493 Latency(us) 00:19:29.493 [2024-10-17T17:26:53.277Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.493 [2024-10-17T17:26:53.277Z] =================================================================================================================== 00:19:29.493 [2024-10-17T17:26:53.277Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:29.493 19:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2119338 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.5XHsGmfKEg 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XHsGmfKEg 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XHsGmfKEg 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XHsGmfKEg 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5XHsGmfKEg 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2121172 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2121172 /var/tmp/bdevperf.sock 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2121172 ']' 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:29.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:29.493 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.493 [2024-10-17 19:26:53.197938] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
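This pass (pid 2121172) is another expected failure: tls.sh@171 loosened the key file to mode 0666 first, and the keyring deliberately refuses group- or world-accessible key files. A sketch of the two calls (rpc.py shorthand as before):

chmod 0666 /tmp/tmp.5XHsGmfKEg
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg
# -> -1 (Operation not permitted): "Invalid permissions for key file '/tmp/tmp.5XHsGmfKEg': 0100666"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# -> -126 (Required key not available): "key0" never made it into the keyring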
00:19:29.493 [2024-10-17 19:26:53.197988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121172 ] 00:19:29.493 [2024-10-17 19:26:53.271551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.753 [2024-10-17 19:26:53.310798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.753 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.753 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.753 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:30.012 [2024-10-17 19:26:53.576534] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5XHsGmfKEg': 0100666 00:19:30.012 [2024-10-17 19:26:53.576568] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:30.012 request: 00:19:30.012 { 00:19:30.012 "name": "key0", 00:19:30.012 "path": "/tmp/tmp.5XHsGmfKEg", 00:19:30.012 "method": "keyring_file_add_key", 00:19:30.012 "req_id": 1 00:19:30.012 } 00:19:30.012 Got JSON-RPC error response 00:19:30.012 response: 00:19:30.012 { 00:19:30.012 "code": -1, 00:19:30.012 "message": "Operation not permitted" 00:19:30.012 } 00:19:30.012 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.012 [2024-10-17 19:26:53.761095] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:30.012 [2024-10-17 19:26:53.761122] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:30.012 request: 00:19:30.012 { 00:19:30.012 "name": "TLSTEST", 00:19:30.012 "trtype": "tcp", 00:19:30.012 "traddr": "10.0.0.2", 00:19:30.012 "adrfam": "ipv4", 00:19:30.012 "trsvcid": "4420", 00:19:30.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.012 "prchk_reftag": false, 00:19:30.012 "prchk_guard": false, 00:19:30.012 "hdgst": false, 00:19:30.012 "ddgst": false, 00:19:30.012 "psk": "key0", 00:19:30.012 "allow_unrecognized_csi": false, 00:19:30.012 "method": "bdev_nvme_attach_controller", 00:19:30.012 "req_id": 1 00:19:30.012 } 00:19:30.012 Got JSON-RPC error response 00:19:30.012 response: 00:19:30.012 { 00:19:30.012 "code": -126, 00:19:30.012 "message": "Required key not available" 00:19:30.012 } 00:19:30.012 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2121172 00:19:30.012 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2121172 ']' 00:19:30.012 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2121172 00:19:30.012 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.012 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.271 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121172 00:19:30.271 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:30.271 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:30.271 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121172' 00:19:30.271 killing process with pid 2121172 00:19:30.271 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2121172 00:19:30.271 Received shutdown signal, test time was about 10.000000 seconds 00:19:30.272 00:19:30.272 Latency(us) 00:19:30.272 [2024-10-17T17:26:54.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.272 [2024-10-17T17:26:54.056Z] =================================================================================================================== 00:19:30.272 [2024-10-17T17:26:54.056Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2121172 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2119087 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2119087 ']' 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2119087 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:30.272 19:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2119087 00:19:30.272 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:30.272 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:30.272 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2119087' 00:19:30.272 killing process with pid 2119087 00:19:30.272 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2119087 00:19:30.272 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2119087 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # 
nvmfpid=2121409 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2121409 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2121409 ']' 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.531 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.531 [2024-10-17 19:26:54.261517] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:30.531 [2024-10-17 19:26:54.261564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:30.790 [2024-10-17 19:26:54.342488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.790 [2024-10-17 19:26:54.380861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.790 [2024-10-17 19:26:54.380896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.790 [2024-10-17 19:26:54.380904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.790 [2024-10-17 19:26:54.380909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.790 [2024-10-17 19:26:54.380914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
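The fresh target (pid 2121409) now repeats the server-side setup with the key file still at mode 0666, which is why tls.sh@178 wraps setup_nvmf_tgt in NOT: this time the failure surfaces on the target rather than in bdevperf. The two calls that matter (rpc.py shorthand as before, default target socket):

rpc.py keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg
# -> -1 (Operation not permitted): the file is still 0100666
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# -> -32603 (Internal error): "Key 'key0' does not exist", so the host entry
#    cannot be created and the whole setup is rejected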
00:19:30.790 [2024-10-17 19:26:54.381447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.5XHsGmfKEg 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5XHsGmfKEg 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.5XHsGmfKEg 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5XHsGmfKEg 00:19:30.790 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.049 [2024-10-17 19:26:54.691722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.049 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.308 19:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:31.308 [2024-10-17 19:26:55.084729] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.308 [2024-10-17 19:26:55.084937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.567 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:31.567 malloc0 00:19:31.567 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.827 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:32.086 [2024-10-17 
19:26:55.690218] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5XHsGmfKEg': 0100666 00:19:32.086 [2024-10-17 19:26:55.690243] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:32.086 request: 00:19:32.086 { 00:19:32.086 "name": "key0", 00:19:32.086 "path": "/tmp/tmp.5XHsGmfKEg", 00:19:32.086 "method": "keyring_file_add_key", 00:19:32.086 "req_id": 1 00:19:32.086 } 00:19:32.086 Got JSON-RPC error response 00:19:32.086 response: 00:19:32.086 { 00:19:32.086 "code": -1, 00:19:32.086 "message": "Operation not permitted" 00:19:32.086 } 00:19:32.086 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.345 [2024-10-17 19:26:55.878719] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:32.345 [2024-10-17 19:26:55.878753] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:32.345 request: 00:19:32.345 { 00:19:32.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.345 "host": "nqn.2016-06.io.spdk:host1", 00:19:32.345 "psk": "key0", 00:19:32.345 "method": "nvmf_subsystem_add_host", 00:19:32.345 "req_id": 1 00:19:32.345 } 00:19:32.345 Got JSON-RPC error response 00:19:32.345 response: 00:19:32.345 { 00:19:32.345 "code": -32603, 00:19:32.345 "message": "Internal error" 00:19:32.345 } 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2121409 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2121409 ']' 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2121409 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121409 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121409' 00:19:32.345 killing process with pid 2121409 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2121409 00:19:32.345 19:26:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2121409 00:19:32.345 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.5XHsGmfKEg 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:32.605 19:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2121683 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2121683 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2121683 ']' 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.605 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.605 [2024-10-17 19:26:56.189165] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:32.605 [2024-10-17 19:26:56.189210] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.605 [2024-10-17 19:26:56.266604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.605 [2024-10-17 19:26:56.306185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.605 [2024-10-17 19:26:56.306217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.605 [2024-10-17 19:26:56.306224] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.605 [2024-10-17 19:26:56.306229] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.605 [2024-10-17 19:26:56.306234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
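With permissions restored (chmod 0600 at tls.sh@182 above), the same setup is about to succeed end-to-end on this fresh target (pid 2121683), after which the script snapshots both applications' state with save_config; those dumps are the large JSON blocks below, which the script keeps in the shell variables tgtconf and bdevperfconf. A sketch of that last step, writing to files instead of variables (the file names here are illustrative, not from the test):

rpc.py save_config > tgt_config.json                             # target side
rpc.py -s /var/tmp/bdevperf.sock save_config > perf_config.json  # bdevperf side
# a dump like this can be replayed at application startup, e.g. via the
# standard --json option, which is presumably why the test captures it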
00:19:32.605 [2024-10-17 19:26:56.306790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.5XHsGmfKEg 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5XHsGmfKEg 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:32.864 [2024-10-17 19:26:56.600240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.864 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.123 19:26:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.381 [2024-10-17 19:26:56.977213] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.381 [2024-10-17 19:26:56.977422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.381 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.643 malloc0 00:19:33.643 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.643 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:33.904 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.162 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2121939 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2121939 /var/tmp/bdevperf.sock 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2121939 ']' 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:34.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.163 19:26:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.163 [2024-10-17 19:26:57.817251] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:34.163 [2024-10-17 19:26:57.817299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121939 ] 00:19:34.163 [2024-10-17 19:26:57.890230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.163 [2024-10-17 19:26:57.930276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.422 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.422 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:34.422 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:34.681 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:34.681 [2024-10-17 19:26:58.412510] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.940 TLSTESTn1 00:19:34.940 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:35.200 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:35.200 "subsystems": [ 00:19:35.200 { 00:19:35.200 "subsystem": "keyring", 00:19:35.200 "config": [ 00:19:35.200 { 00:19:35.200 "method": "keyring_file_add_key", 00:19:35.200 "params": { 00:19:35.200 "name": "key0", 00:19:35.200 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:35.200 } 00:19:35.200 } 00:19:35.200 ] 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "subsystem": "iobuf", 00:19:35.200 "config": [ 00:19:35.200 { 00:19:35.200 "method": "iobuf_set_options", 00:19:35.200 "params": { 00:19:35.200 "small_pool_count": 8192, 00:19:35.200 "large_pool_count": 1024, 00:19:35.200 "small_bufsize": 8192, 00:19:35.200 "large_bufsize": 135168, 00:19:35.200 "enable_numa": false 00:19:35.200 } 00:19:35.200 } 00:19:35.200 ] 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "subsystem": "sock", 00:19:35.200 "config": [ 00:19:35.200 { 00:19:35.200 "method": "sock_set_default_impl", 00:19:35.200 "params": { 00:19:35.200 "impl_name": "posix" 
00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "sock_impl_set_options", 00:19:35.200 "params": { 00:19:35.200 "impl_name": "ssl", 00:19:35.200 "recv_buf_size": 4096, 00:19:35.200 "send_buf_size": 4096, 00:19:35.200 "enable_recv_pipe": true, 00:19:35.200 "enable_quickack": false, 00:19:35.200 "enable_placement_id": 0, 00:19:35.200 "enable_zerocopy_send_server": true, 00:19:35.200 "enable_zerocopy_send_client": false, 00:19:35.200 "zerocopy_threshold": 0, 00:19:35.200 "tls_version": 0, 00:19:35.200 "enable_ktls": false 00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "sock_impl_set_options", 00:19:35.200 "params": { 00:19:35.200 "impl_name": "posix", 00:19:35.200 "recv_buf_size": 2097152, 00:19:35.200 "send_buf_size": 2097152, 00:19:35.200 "enable_recv_pipe": true, 00:19:35.200 "enable_quickack": false, 00:19:35.200 "enable_placement_id": 0, 00:19:35.200 "enable_zerocopy_send_server": true, 00:19:35.200 "enable_zerocopy_send_client": false, 00:19:35.200 "zerocopy_threshold": 0, 00:19:35.200 "tls_version": 0, 00:19:35.200 "enable_ktls": false 00:19:35.200 } 00:19:35.200 } 00:19:35.200 ] 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "subsystem": "vmd", 00:19:35.200 "config": [] 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "subsystem": "accel", 00:19:35.200 "config": [ 00:19:35.200 { 00:19:35.200 "method": "accel_set_options", 00:19:35.200 "params": { 00:19:35.200 "small_cache_size": 128, 00:19:35.200 "large_cache_size": 16, 00:19:35.200 "task_count": 2048, 00:19:35.200 "sequence_count": 2048, 00:19:35.200 "buf_count": 2048 00:19:35.200 } 00:19:35.200 } 00:19:35.200 ] 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "subsystem": "bdev", 00:19:35.200 "config": [ 00:19:35.200 { 00:19:35.200 "method": "bdev_set_options", 00:19:35.200 "params": { 00:19:35.200 "bdev_io_pool_size": 65535, 00:19:35.200 "bdev_io_cache_size": 256, 00:19:35.200 "bdev_auto_examine": true, 00:19:35.200 "iobuf_small_cache_size": 128, 00:19:35.200 "iobuf_large_cache_size": 16 00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "bdev_raid_set_options", 00:19:35.200 "params": { 00:19:35.200 "process_window_size_kb": 1024, 00:19:35.200 "process_max_bandwidth_mb_sec": 0 00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "bdev_iscsi_set_options", 00:19:35.200 "params": { 00:19:35.200 "timeout_sec": 30 00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "bdev_nvme_set_options", 00:19:35.200 "params": { 00:19:35.200 "action_on_timeout": "none", 00:19:35.200 "timeout_us": 0, 00:19:35.200 "timeout_admin_us": 0, 00:19:35.200 "keep_alive_timeout_ms": 10000, 00:19:35.200 "arbitration_burst": 0, 00:19:35.200 "low_priority_weight": 0, 00:19:35.200 "medium_priority_weight": 0, 00:19:35.200 "high_priority_weight": 0, 00:19:35.200 "nvme_adminq_poll_period_us": 10000, 00:19:35.200 "nvme_ioq_poll_period_us": 0, 00:19:35.200 "io_queue_requests": 0, 00:19:35.200 "delay_cmd_submit": true, 00:19:35.200 "transport_retry_count": 4, 00:19:35.200 "bdev_retry_count": 3, 00:19:35.200 "transport_ack_timeout": 0, 00:19:35.200 "ctrlr_loss_timeout_sec": 0, 00:19:35.200 "reconnect_delay_sec": 0, 00:19:35.200 "fast_io_fail_timeout_sec": 0, 00:19:35.200 "disable_auto_failback": false, 00:19:35.200 "generate_uuids": false, 00:19:35.200 "transport_tos": 0, 00:19:35.200 "nvme_error_stat": false, 00:19:35.200 "rdma_srq_size": 0, 00:19:35.200 "io_path_stat": false, 00:19:35.200 "allow_accel_sequence": false, 00:19:35.200 "rdma_max_cq_size": 0, 00:19:35.200 
"rdma_cm_event_timeout_ms": 0, 00:19:35.200 "dhchap_digests": [ 00:19:35.200 "sha256", 00:19:35.200 "sha384", 00:19:35.200 "sha512" 00:19:35.200 ], 00:19:35.200 "dhchap_dhgroups": [ 00:19:35.200 "null", 00:19:35.200 "ffdhe2048", 00:19:35.200 "ffdhe3072", 00:19:35.200 "ffdhe4096", 00:19:35.200 "ffdhe6144", 00:19:35.200 "ffdhe8192" 00:19:35.200 ] 00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "bdev_nvme_set_hotplug", 00:19:35.200 "params": { 00:19:35.200 "period_us": 100000, 00:19:35.200 "enable": false 00:19:35.200 } 00:19:35.200 }, 00:19:35.200 { 00:19:35.200 "method": "bdev_malloc_create", 00:19:35.200 "params": { 00:19:35.201 "name": "malloc0", 00:19:35.201 "num_blocks": 8192, 00:19:35.201 "block_size": 4096, 00:19:35.201 "physical_block_size": 4096, 00:19:35.201 "uuid": "92acc58b-cfeb-4143-85d1-1661e97e235c", 00:19:35.201 "optimal_io_boundary": 0, 00:19:35.201 "md_size": 0, 00:19:35.201 "dif_type": 0, 00:19:35.201 "dif_is_head_of_md": false, 00:19:35.201 "dif_pi_format": 0 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "bdev_wait_for_examine" 00:19:35.201 } 00:19:35.201 ] 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "subsystem": "nbd", 00:19:35.201 "config": [] 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "subsystem": "scheduler", 00:19:35.201 "config": [ 00:19:35.201 { 00:19:35.201 "method": "framework_set_scheduler", 00:19:35.201 "params": { 00:19:35.201 "name": "static" 00:19:35.201 } 00:19:35.201 } 00:19:35.201 ] 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "subsystem": "nvmf", 00:19:35.201 "config": [ 00:19:35.201 { 00:19:35.201 "method": "nvmf_set_config", 00:19:35.201 "params": { 00:19:35.201 "discovery_filter": "match_any", 00:19:35.201 "admin_cmd_passthru": { 00:19:35.201 "identify_ctrlr": false 00:19:35.201 }, 00:19:35.201 "dhchap_digests": [ 00:19:35.201 "sha256", 00:19:35.201 "sha384", 00:19:35.201 "sha512" 00:19:35.201 ], 00:19:35.201 "dhchap_dhgroups": [ 00:19:35.201 "null", 00:19:35.201 "ffdhe2048", 00:19:35.201 "ffdhe3072", 00:19:35.201 "ffdhe4096", 00:19:35.201 "ffdhe6144", 00:19:35.201 "ffdhe8192" 00:19:35.201 ] 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_set_max_subsystems", 00:19:35.201 "params": { 00:19:35.201 "max_subsystems": 1024 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_set_crdt", 00:19:35.201 "params": { 00:19:35.201 "crdt1": 0, 00:19:35.201 "crdt2": 0, 00:19:35.201 "crdt3": 0 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_create_transport", 00:19:35.201 "params": { 00:19:35.201 "trtype": "TCP", 00:19:35.201 "max_queue_depth": 128, 00:19:35.201 "max_io_qpairs_per_ctrlr": 127, 00:19:35.201 "in_capsule_data_size": 4096, 00:19:35.201 "max_io_size": 131072, 00:19:35.201 "io_unit_size": 131072, 00:19:35.201 "max_aq_depth": 128, 00:19:35.201 "num_shared_buffers": 511, 00:19:35.201 "buf_cache_size": 4294967295, 00:19:35.201 "dif_insert_or_strip": false, 00:19:35.201 "zcopy": false, 00:19:35.201 "c2h_success": false, 00:19:35.201 "sock_priority": 0, 00:19:35.201 "abort_timeout_sec": 1, 00:19:35.201 "ack_timeout": 0, 00:19:35.201 "data_wr_pool_size": 0 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_create_subsystem", 00:19:35.201 "params": { 00:19:35.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.201 "allow_any_host": false, 00:19:35.201 "serial_number": "SPDK00000000000001", 00:19:35.201 "model_number": "SPDK bdev Controller", 00:19:35.201 "max_namespaces": 10, 00:19:35.201 "min_cntlid": 1, 00:19:35.201 
"max_cntlid": 65519, 00:19:35.201 "ana_reporting": false 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_subsystem_add_host", 00:19:35.201 "params": { 00:19:35.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.201 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.201 "psk": "key0" 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_subsystem_add_ns", 00:19:35.201 "params": { 00:19:35.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.201 "namespace": { 00:19:35.201 "nsid": 1, 00:19:35.201 "bdev_name": "malloc0", 00:19:35.201 "nguid": "92ACC58BCFEB414385D11661E97E235C", 00:19:35.201 "uuid": "92acc58b-cfeb-4143-85d1-1661e97e235c", 00:19:35.201 "no_auto_visible": false 00:19:35.201 } 00:19:35.201 } 00:19:35.201 }, 00:19:35.201 { 00:19:35.201 "method": "nvmf_subsystem_add_listener", 00:19:35.201 "params": { 00:19:35.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.201 "listen_address": { 00:19:35.201 "trtype": "TCP", 00:19:35.201 "adrfam": "IPv4", 00:19:35.201 "traddr": "10.0.0.2", 00:19:35.201 "trsvcid": "4420" 00:19:35.201 }, 00:19:35.201 "secure_channel": true 00:19:35.201 } 00:19:35.201 } 00:19:35.201 ] 00:19:35.201 } 00:19:35.201 ] 00:19:35.201 }' 00:19:35.201 19:26:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:35.461 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:35.461 "subsystems": [ 00:19:35.461 { 00:19:35.461 "subsystem": "keyring", 00:19:35.461 "config": [ 00:19:35.461 { 00:19:35.461 "method": "keyring_file_add_key", 00:19:35.461 "params": { 00:19:35.461 "name": "key0", 00:19:35.461 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:35.461 } 00:19:35.461 } 00:19:35.461 ] 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "subsystem": "iobuf", 00:19:35.461 "config": [ 00:19:35.461 { 00:19:35.461 "method": "iobuf_set_options", 00:19:35.461 "params": { 00:19:35.461 "small_pool_count": 8192, 00:19:35.461 "large_pool_count": 1024, 00:19:35.461 "small_bufsize": 8192, 00:19:35.461 "large_bufsize": 135168, 00:19:35.461 "enable_numa": false 00:19:35.461 } 00:19:35.461 } 00:19:35.461 ] 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "subsystem": "sock", 00:19:35.461 "config": [ 00:19:35.461 { 00:19:35.461 "method": "sock_set_default_impl", 00:19:35.461 "params": { 00:19:35.461 "impl_name": "posix" 00:19:35.461 } 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "method": "sock_impl_set_options", 00:19:35.461 "params": { 00:19:35.461 "impl_name": "ssl", 00:19:35.461 "recv_buf_size": 4096, 00:19:35.461 "send_buf_size": 4096, 00:19:35.461 "enable_recv_pipe": true, 00:19:35.461 "enable_quickack": false, 00:19:35.461 "enable_placement_id": 0, 00:19:35.461 "enable_zerocopy_send_server": true, 00:19:35.461 "enable_zerocopy_send_client": false, 00:19:35.461 "zerocopy_threshold": 0, 00:19:35.461 "tls_version": 0, 00:19:35.461 "enable_ktls": false 00:19:35.461 } 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "method": "sock_impl_set_options", 00:19:35.461 "params": { 00:19:35.461 "impl_name": "posix", 00:19:35.461 "recv_buf_size": 2097152, 00:19:35.461 "send_buf_size": 2097152, 00:19:35.461 "enable_recv_pipe": true, 00:19:35.461 "enable_quickack": false, 00:19:35.461 "enable_placement_id": 0, 00:19:35.461 "enable_zerocopy_send_server": true, 00:19:35.461 "enable_zerocopy_send_client": false, 00:19:35.461 "zerocopy_threshold": 0, 00:19:35.461 "tls_version": 0, 00:19:35.461 "enable_ktls": false 00:19:35.461 } 00:19:35.461 
} 00:19:35.461 ] 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "subsystem": "vmd", 00:19:35.461 "config": [] 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "subsystem": "accel", 00:19:35.461 "config": [ 00:19:35.461 { 00:19:35.461 "method": "accel_set_options", 00:19:35.461 "params": { 00:19:35.461 "small_cache_size": 128, 00:19:35.461 "large_cache_size": 16, 00:19:35.461 "task_count": 2048, 00:19:35.461 "sequence_count": 2048, 00:19:35.461 "buf_count": 2048 00:19:35.461 } 00:19:35.461 } 00:19:35.461 ] 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "subsystem": "bdev", 00:19:35.461 "config": [ 00:19:35.461 { 00:19:35.461 "method": "bdev_set_options", 00:19:35.461 "params": { 00:19:35.461 "bdev_io_pool_size": 65535, 00:19:35.461 "bdev_io_cache_size": 256, 00:19:35.461 "bdev_auto_examine": true, 00:19:35.461 "iobuf_small_cache_size": 128, 00:19:35.461 "iobuf_large_cache_size": 16 00:19:35.461 } 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "method": "bdev_raid_set_options", 00:19:35.461 "params": { 00:19:35.461 "process_window_size_kb": 1024, 00:19:35.461 "process_max_bandwidth_mb_sec": 0 00:19:35.461 } 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "method": "bdev_iscsi_set_options", 00:19:35.461 "params": { 00:19:35.461 "timeout_sec": 30 00:19:35.461 } 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "method": "bdev_nvme_set_options", 00:19:35.461 "params": { 00:19:35.461 "action_on_timeout": "none", 00:19:35.461 "timeout_us": 0, 00:19:35.461 "timeout_admin_us": 0, 00:19:35.461 "keep_alive_timeout_ms": 10000, 00:19:35.461 "arbitration_burst": 0, 00:19:35.461 "low_priority_weight": 0, 00:19:35.461 "medium_priority_weight": 0, 00:19:35.461 "high_priority_weight": 0, 00:19:35.461 "nvme_adminq_poll_period_us": 10000, 00:19:35.461 "nvme_ioq_poll_period_us": 0, 00:19:35.461 "io_queue_requests": 512, 00:19:35.461 "delay_cmd_submit": true, 00:19:35.461 "transport_retry_count": 4, 00:19:35.461 "bdev_retry_count": 3, 00:19:35.461 "transport_ack_timeout": 0, 00:19:35.461 "ctrlr_loss_timeout_sec": 0, 00:19:35.461 "reconnect_delay_sec": 0, 00:19:35.461 "fast_io_fail_timeout_sec": 0, 00:19:35.461 "disable_auto_failback": false, 00:19:35.461 "generate_uuids": false, 00:19:35.461 "transport_tos": 0, 00:19:35.461 "nvme_error_stat": false, 00:19:35.461 "rdma_srq_size": 0, 00:19:35.461 "io_path_stat": false, 00:19:35.461 "allow_accel_sequence": false, 00:19:35.461 "rdma_max_cq_size": 0, 00:19:35.461 "rdma_cm_event_timeout_ms": 0, 00:19:35.461 "dhchap_digests": [ 00:19:35.461 "sha256", 00:19:35.461 "sha384", 00:19:35.461 "sha512" 00:19:35.461 ], 00:19:35.461 "dhchap_dhgroups": [ 00:19:35.461 "null", 00:19:35.461 "ffdhe2048", 00:19:35.461 "ffdhe3072", 00:19:35.461 "ffdhe4096", 00:19:35.461 "ffdhe6144", 00:19:35.461 "ffdhe8192" 00:19:35.461 ] 00:19:35.461 } 00:19:35.461 }, 00:19:35.461 { 00:19:35.461 "method": "bdev_nvme_attach_controller", 00:19:35.461 "params": { 00:19:35.461 "name": "TLSTEST", 00:19:35.461 "trtype": "TCP", 00:19:35.461 "adrfam": "IPv4", 00:19:35.461 "traddr": "10.0.0.2", 00:19:35.461 "trsvcid": "4420", 00:19:35.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.461 "prchk_reftag": false, 00:19:35.461 "prchk_guard": false, 00:19:35.461 "ctrlr_loss_timeout_sec": 0, 00:19:35.461 "reconnect_delay_sec": 0, 00:19:35.461 "fast_io_fail_timeout_sec": 0, 00:19:35.461 "psk": "key0", 00:19:35.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.461 "hdgst": false, 00:19:35.461 "ddgst": false, 00:19:35.461 "multipath": "multipath" 00:19:35.461 } 00:19:35.461 }, 00:19:35.462 { 00:19:35.462 "method": 
"bdev_nvme_set_hotplug", 00:19:35.462 "params": { 00:19:35.462 "period_us": 100000, 00:19:35.462 "enable": false 00:19:35.462 } 00:19:35.462 }, 00:19:35.462 { 00:19:35.462 "method": "bdev_wait_for_examine" 00:19:35.462 } 00:19:35.462 ] 00:19:35.462 }, 00:19:35.462 { 00:19:35.462 "subsystem": "nbd", 00:19:35.462 "config": [] 00:19:35.462 } 00:19:35.462 ] 00:19:35.462 }' 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2121939 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2121939 ']' 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2121939 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121939 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121939' 00:19:35.462 killing process with pid 2121939 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2121939 00:19:35.462 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.462 00:19:35.462 Latency(us) 00:19:35.462 [2024-10-17T17:26:59.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.462 [2024-10-17T17:26:59.246Z] =================================================================================================================== 00:19:35.462 [2024-10-17T17:26:59.246Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2121939 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2121683 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2121683 ']' 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2121683 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:35.462 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2121683 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2121683' 00:19:35.722 killing process with pid 2121683 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2121683 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2121683 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.722 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:35.722 "subsystems": [ 00:19:35.722 { 00:19:35.722 "subsystem": "keyring", 00:19:35.722 "config": [ 00:19:35.722 { 00:19:35.722 "method": "keyring_file_add_key", 00:19:35.722 "params": { 00:19:35.722 "name": "key0", 00:19:35.722 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:35.722 } 00:19:35.722 } 00:19:35.722 ] 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "subsystem": "iobuf", 00:19:35.722 "config": [ 00:19:35.722 { 00:19:35.722 "method": "iobuf_set_options", 00:19:35.722 "params": { 00:19:35.722 "small_pool_count": 8192, 00:19:35.722 "large_pool_count": 1024, 00:19:35.722 "small_bufsize": 8192, 00:19:35.722 "large_bufsize": 135168, 00:19:35.722 "enable_numa": false 00:19:35.722 } 00:19:35.722 } 00:19:35.722 ] 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "subsystem": "sock", 00:19:35.722 "config": [ 00:19:35.722 { 00:19:35.722 "method": "sock_set_default_impl", 00:19:35.722 "params": { 00:19:35.722 "impl_name": "posix" 00:19:35.722 } 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "method": "sock_impl_set_options", 00:19:35.722 "params": { 00:19:35.722 "impl_name": "ssl", 00:19:35.722 "recv_buf_size": 4096, 00:19:35.722 "send_buf_size": 4096, 00:19:35.722 "enable_recv_pipe": true, 00:19:35.722 "enable_quickack": false, 00:19:35.722 "enable_placement_id": 0, 00:19:35.722 "enable_zerocopy_send_server": true, 00:19:35.722 "enable_zerocopy_send_client": false, 00:19:35.722 "zerocopy_threshold": 0, 00:19:35.722 "tls_version": 0, 00:19:35.722 "enable_ktls": false 00:19:35.722 } 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "method": "sock_impl_set_options", 00:19:35.722 "params": { 00:19:35.722 "impl_name": "posix", 00:19:35.722 "recv_buf_size": 2097152, 00:19:35.722 "send_buf_size": 2097152, 00:19:35.722 "enable_recv_pipe": true, 00:19:35.722 "enable_quickack": false, 00:19:35.722 "enable_placement_id": 0, 00:19:35.722 "enable_zerocopy_send_server": true, 00:19:35.722 "enable_zerocopy_send_client": false, 00:19:35.722 "zerocopy_threshold": 0, 00:19:35.722 "tls_version": 0, 00:19:35.722 "enable_ktls": false 00:19:35.722 } 00:19:35.722 } 00:19:35.722 ] 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "subsystem": "vmd", 00:19:35.722 "config": [] 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "subsystem": "accel", 00:19:35.722 "config": [ 00:19:35.722 { 00:19:35.722 "method": "accel_set_options", 00:19:35.722 "params": { 00:19:35.722 "small_cache_size": 128, 00:19:35.722 "large_cache_size": 16, 00:19:35.722 "task_count": 2048, 00:19:35.722 "sequence_count": 2048, 00:19:35.722 "buf_count": 2048 00:19:35.722 } 00:19:35.722 } 00:19:35.722 ] 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "subsystem": "bdev", 00:19:35.722 "config": [ 00:19:35.722 { 00:19:35.722 "method": "bdev_set_options", 00:19:35.722 "params": { 00:19:35.722 "bdev_io_pool_size": 65535, 00:19:35.722 "bdev_io_cache_size": 256, 00:19:35.722 "bdev_auto_examine": true, 00:19:35.722 "iobuf_small_cache_size": 128, 00:19:35.722 "iobuf_large_cache_size": 16 00:19:35.722 } 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "method": "bdev_raid_set_options", 00:19:35.722 "params": { 00:19:35.722 "process_window_size_kb": 1024, 00:19:35.722 "process_max_bandwidth_mb_sec": 0 00:19:35.722 } 00:19:35.722 }, 
00:19:35.722 { 00:19:35.722 "method": "bdev_iscsi_set_options", 00:19:35.722 "params": { 00:19:35.722 "timeout_sec": 30 00:19:35.722 } 00:19:35.722 }, 00:19:35.722 { 00:19:35.722 "method": "bdev_nvme_set_options", 00:19:35.722 "params": { 00:19:35.722 "action_on_timeout": "none", 00:19:35.722 "timeout_us": 0, 00:19:35.722 "timeout_admin_us": 0, 00:19:35.722 "keep_alive_timeout_ms": 10000, 00:19:35.722 "arbitration_burst": 0, 00:19:35.722 "low_priority_weight": 0, 00:19:35.722 "medium_priority_weight": 0, 00:19:35.722 "high_priority_weight": 0, 00:19:35.722 "nvme_adminq_poll_period_us": 10000, 00:19:35.722 "nvme_ioq_poll_period_us": 0, 00:19:35.722 "io_queue_requests": 0, 00:19:35.722 "delay_cmd_submit": true, 00:19:35.722 "transport_retry_count": 4, 00:19:35.723 "bdev_retry_count": 3, 00:19:35.723 "transport_ack_timeout": 0, 00:19:35.723 "ctrlr_loss_timeout_sec": 0, 00:19:35.723 "reconnect_delay_sec": 0, 00:19:35.723 "fast_io_fail_timeout_sec": 0, 00:19:35.723 "disable_auto_failback": false, 00:19:35.723 "generate_uuids": false, 00:19:35.723 "transport_tos": 0, 00:19:35.723 "nvme_error_stat": false, 00:19:35.723 "rdma_srq_size": 0, 00:19:35.723 "io_path_stat": false, 00:19:35.723 "allow_accel_sequence": false, 00:19:35.723 "rdma_max_cq_size": 0, 00:19:35.723 "rdma_cm_event_timeout_ms": 0, 00:19:35.723 "dhchap_digests": [ 00:19:35.723 "sha256", 00:19:35.723 "sha384", 00:19:35.723 "sha512" 00:19:35.723 ], 00:19:35.723 "dhchap_dhgroups": [ 00:19:35.723 "null", 00:19:35.723 "ffdhe2048", 00:19:35.723 "ffdhe3072", 00:19:35.723 "ffdhe4096", 00:19:35.723 "ffdhe6144", 00:19:35.723 "ffdhe8192" 00:19:35.723 ] 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "bdev_nvme_set_hotplug", 00:19:35.723 "params": { 00:19:35.723 "period_us": 100000, 00:19:35.723 "enable": false 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "bdev_malloc_create", 00:19:35.723 "params": { 00:19:35.723 "name": "malloc0", 00:19:35.723 "num_blocks": 8192, 00:19:35.723 "block_size": 4096, 00:19:35.723 "physical_block_size": 4096, 00:19:35.723 "uuid": "92acc58b-cfeb-4143-85d1-1661e97e235c", 00:19:35.723 "optimal_io_boundary": 0, 00:19:35.723 "md_size": 0, 00:19:35.723 "dif_type": 0, 00:19:35.723 "dif_is_head_of_md": false, 00:19:35.723 "dif_pi_format": 0 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "bdev_wait_for_examine" 00:19:35.723 } 00:19:35.723 ] 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "subsystem": "nbd", 00:19:35.723 "config": [] 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "subsystem": "scheduler", 00:19:35.723 "config": [ 00:19:35.723 { 00:19:35.723 "method": "framework_set_scheduler", 00:19:35.723 "params": { 00:19:35.723 "name": "static" 00:19:35.723 } 00:19:35.723 } 00:19:35.723 ] 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "subsystem": "nvmf", 00:19:35.723 "config": [ 00:19:35.723 { 00:19:35.723 "method": "nvmf_set_config", 00:19:35.723 "params": { 00:19:35.723 "discovery_filter": "match_any", 00:19:35.723 "admin_cmd_passthru": { 00:19:35.723 "identify_ctrlr": false 00:19:35.723 }, 00:19:35.723 "dhchap_digests": [ 00:19:35.723 "sha256", 00:19:35.723 "sha384", 00:19:35.723 "sha512" 00:19:35.723 ], 00:19:35.723 "dhchap_dhgroups": [ 00:19:35.723 "null", 00:19:35.723 "ffdhe2048", 00:19:35.723 "ffdhe3072", 00:19:35.723 "ffdhe4096", 00:19:35.723 "ffdhe6144", 00:19:35.723 "ffdhe8192" 00:19:35.723 ] 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_set_max_subsystems", 00:19:35.723 "params": { 00:19:35.723 "max_subsystems": 1024 
00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_set_crdt", 00:19:35.723 "params": { 00:19:35.723 "crdt1": 0, 00:19:35.723 "crdt2": 0, 00:19:35.723 "crdt3": 0 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_create_transport", 00:19:35.723 "params": { 00:19:35.723 "trtype": "TCP", 00:19:35.723 "max_queue_depth": 128, 00:19:35.723 "max_io_qpairs_per_ctrlr": 127, 00:19:35.723 "in_capsule_data_size": 4096, 00:19:35.723 "max_io_size": 131072, 00:19:35.723 "io_unit_size": 131072, 00:19:35.723 "max_aq_depth": 128, 00:19:35.723 "num_shared_buffers": 511, 00:19:35.723 "buf_cache_size": 4294967295, 00:19:35.723 "dif_insert_or_strip": false, 00:19:35.723 "zcopy": false, 00:19:35.723 "c2h_success": false, 00:19:35.723 "sock_priority": 0, 00:19:35.723 "abort_timeout_sec": 1, 00:19:35.723 "ack_timeout": 0, 00:19:35.723 "data_wr_pool_size": 0 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_create_subsystem", 00:19:35.723 "params": { 00:19:35.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.723 "allow_any_host": false, 00:19:35.723 "serial_number": "SPDK00000000000001", 00:19:35.723 "model_number": "SPDK bdev Controller", 00:19:35.723 "max_namespaces": 10, 00:19:35.723 "min_cntlid": 1, 00:19:35.723 "max_cntlid": 65519, 00:19:35.723 "ana_reporting": false 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_subsystem_add_host", 00:19:35.723 "params": { 00:19:35.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.723 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.723 "psk": "key0" 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_subsystem_add_ns", 00:19:35.723 "params": { 00:19:35.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.723 "namespace": { 00:19:35.723 "nsid": 1, 00:19:35.723 "bdev_name": "malloc0", 00:19:35.723 "nguid": "92ACC58BCFEB414385D11661E97E235C", 00:19:35.723 "uuid": "92acc58b-cfeb-4143-85d1-1661e97e235c", 00:19:35.723 "no_auto_visible": false 00:19:35.723 } 00:19:35.723 } 00:19:35.723 }, 00:19:35.723 { 00:19:35.723 "method": "nvmf_subsystem_add_listener", 00:19:35.723 "params": { 00:19:35.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.723 "listen_address": { 00:19:35.723 "trtype": "TCP", 00:19:35.723 "adrfam": "IPv4", 00:19:35.723 "traddr": "10.0.0.2", 00:19:35.723 "trsvcid": "4420" 00:19:35.723 }, 00:19:35.723 "secure_channel": true 00:19:35.723 } 00:19:35.723 } 00:19:35.723 ] 00:19:35.723 } 00:19:35.723 ] 00:19:35.723 }' 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2122209 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2122209 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2122209 ']' 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:35.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.723 19:26:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.723 [2024-10-17 19:26:59.496525] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:35.724 [2024-10-17 19:26:59.496570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.983 [2024-10-17 19:26:59.576329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.983 [2024-10-17 19:26:59.615211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.983 [2024-10-17 19:26:59.615247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.983 [2024-10-17 19:26:59.615254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:35.983 [2024-10-17 19:26:59.615260] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:35.983 [2024-10-17 19:26:59.615265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:35.983 [2024-10-17 19:26:59.615852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.241 [2024-10-17 19:26:59.827451] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.241 [2024-10-17 19:26:59.859480] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.241 [2024-10-17 19:26:59.859696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2122460 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2122460 /var/tmp/bdevperf.sock 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2122460 ']' 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:36.809 19:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:36.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:36.809 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:36.809 "subsystems": [ 00:19:36.809 { 00:19:36.809 "subsystem": "keyring", 00:19:36.809 "config": [ 00:19:36.809 { 00:19:36.809 "method": "keyring_file_add_key", 00:19:36.809 "params": { 00:19:36.809 "name": "key0", 00:19:36.809 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:36.809 } 00:19:36.809 } 00:19:36.809 ] 00:19:36.809 }, 00:19:36.809 { 00:19:36.809 "subsystem": "iobuf", 00:19:36.809 "config": [ 00:19:36.809 { 00:19:36.809 "method": "iobuf_set_options", 00:19:36.809 "params": { 00:19:36.809 "small_pool_count": 8192, 00:19:36.809 "large_pool_count": 1024, 00:19:36.809 "small_bufsize": 8192, 00:19:36.809 "large_bufsize": 135168, 00:19:36.809 "enable_numa": false 00:19:36.809 } 00:19:36.809 } 00:19:36.809 ] 00:19:36.809 }, 00:19:36.809 { 00:19:36.809 "subsystem": "sock", 00:19:36.809 "config": [ 00:19:36.809 { 00:19:36.809 "method": "sock_set_default_impl", 00:19:36.809 "params": { 00:19:36.809 "impl_name": "posix" 00:19:36.809 } 00:19:36.809 }, 00:19:36.809 { 00:19:36.809 "method": "sock_impl_set_options", 00:19:36.809 "params": { 00:19:36.809 "impl_name": "ssl", 00:19:36.809 "recv_buf_size": 4096, 00:19:36.809 "send_buf_size": 4096, 00:19:36.809 "enable_recv_pipe": true, 00:19:36.809 "enable_quickack": false, 00:19:36.809 "enable_placement_id": 0, 00:19:36.809 "enable_zerocopy_send_server": true, 00:19:36.809 "enable_zerocopy_send_client": false, 00:19:36.809 "zerocopy_threshold": 0, 00:19:36.809 "tls_version": 0, 00:19:36.809 "enable_ktls": false 00:19:36.809 } 00:19:36.809 }, 00:19:36.809 { 00:19:36.809 "method": "sock_impl_set_options", 00:19:36.809 "params": { 00:19:36.809 "impl_name": "posix", 00:19:36.809 "recv_buf_size": 2097152, 00:19:36.809 "send_buf_size": 2097152, 00:19:36.809 "enable_recv_pipe": true, 00:19:36.809 "enable_quickack": false, 00:19:36.809 "enable_placement_id": 0, 00:19:36.809 "enable_zerocopy_send_server": true, 00:19:36.809 "enable_zerocopy_send_client": false, 00:19:36.809 "zerocopy_threshold": 0, 00:19:36.809 "tls_version": 0, 00:19:36.809 "enable_ktls": false 00:19:36.809 } 00:19:36.809 } 00:19:36.809 ] 00:19:36.809 }, 00:19:36.809 { 00:19:36.809 "subsystem": "vmd", 00:19:36.809 "config": [] 00:19:36.809 }, 00:19:36.809 { 00:19:36.809 "subsystem": "accel", 00:19:36.809 "config": [ 00:19:36.809 { 00:19:36.809 "method": "accel_set_options", 00:19:36.809 "params": { 00:19:36.809 "small_cache_size": 128, 00:19:36.809 "large_cache_size": 16, 00:19:36.809 "task_count": 2048, 00:19:36.810 "sequence_count": 2048, 00:19:36.810 "buf_count": 2048 00:19:36.810 } 00:19:36.810 } 00:19:36.810 ] 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "subsystem": "bdev", 00:19:36.810 "config": [ 00:19:36.810 { 00:19:36.810 "method": "bdev_set_options", 00:19:36.810 "params": { 00:19:36.810 "bdev_io_pool_size": 65535, 00:19:36.810 "bdev_io_cache_size": 256, 00:19:36.810 "bdev_auto_examine": true, 00:19:36.810 "iobuf_small_cache_size": 128, 00:19:36.810 "iobuf_large_cache_size": 16 00:19:36.810 } 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "method": "bdev_raid_set_options", 00:19:36.810 "params": { 00:19:36.810 "process_window_size_kb": 1024, 00:19:36.810 "process_max_bandwidth_mb_sec": 0 00:19:36.810 } 00:19:36.810 }, 
00:19:36.810 { 00:19:36.810 "method": "bdev_iscsi_set_options", 00:19:36.810 "params": { 00:19:36.810 "timeout_sec": 30 00:19:36.810 } 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "method": "bdev_nvme_set_options", 00:19:36.810 "params": { 00:19:36.810 "action_on_timeout": "none", 00:19:36.810 "timeout_us": 0, 00:19:36.810 "timeout_admin_us": 0, 00:19:36.810 "keep_alive_timeout_ms": 10000, 00:19:36.810 "arbitration_burst": 0, 00:19:36.810 "low_priority_weight": 0, 00:19:36.810 "medium_priority_weight": 0, 00:19:36.810 "high_priority_weight": 0, 00:19:36.810 "nvme_adminq_poll_period_us": 10000, 00:19:36.810 "nvme_ioq_poll_period_us": 0, 00:19:36.810 "io_queue_requests": 512, 00:19:36.810 "delay_cmd_submit": true, 00:19:36.810 "transport_retry_count": 4, 00:19:36.810 "bdev_retry_count": 3, 00:19:36.810 "transport_ack_timeout": 0, 00:19:36.810 "ctrlr_loss_timeout_sec": 0, 00:19:36.810 "reconnect_delay_sec": 0, 00:19:36.810 "fast_io_fail_timeout_sec": 0, 00:19:36.810 "disable_auto_failback": false, 00:19:36.810 "generate_uuids": false, 00:19:36.810 "transport_tos": 0, 00:19:36.810 "nvme_error_stat": false, 00:19:36.810 "rdma_srq_size": 0, 00:19:36.810 "io_path_stat": false, 00:19:36.810 "allow_accel_sequence": false, 00:19:36.810 "rdma_max_cq_size": 0, 00:19:36.810 "rdma_cm_event_timeout_ms": 0, 00:19:36.810 "dhchap_digests": [ 00:19:36.810 "sha256", 00:19:36.810 "sha384", 00:19:36.810 "sha512" 00:19:36.810 ], 00:19:36.810 "dhchap_dhgroups": [ 00:19:36.810 "null", 00:19:36.810 "ffdhe2048", 00:19:36.810 "ffdhe3072", 00:19:36.810 "ffdhe4096", 00:19:36.810 "ffdhe6144", 00:19:36.810 "ffdhe8192" 00:19:36.810 ] 00:19:36.810 } 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "method": "bdev_nvme_attach_controller", 00:19:36.810 "params": { 00:19:36.810 "name": "TLSTEST", 00:19:36.810 "trtype": "TCP", 00:19:36.810 "adrfam": "IPv4", 00:19:36.810 "traddr": "10.0.0.2", 00:19:36.810 "trsvcid": "4420", 00:19:36.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.810 "prchk_reftag": false, 00:19:36.810 "prchk_guard": false, 00:19:36.810 "ctrlr_loss_timeout_sec": 0, 00:19:36.810 "reconnect_delay_sec": 0, 00:19:36.810 "fast_io_fail_timeout_sec": 0, 00:19:36.810 "psk": "key0", 00:19:36.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.810 "hdgst": false, 00:19:36.810 "ddgst": false, 00:19:36.810 "multipath": "multipath" 00:19:36.810 } 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "method": "bdev_nvme_set_hotplug", 00:19:36.810 "params": { 00:19:36.810 "period_us": 100000, 00:19:36.810 "enable": false 00:19:36.810 } 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "method": "bdev_wait_for_examine" 00:19:36.810 } 00:19:36.810 ] 00:19:36.810 }, 00:19:36.810 { 00:19:36.810 "subsystem": "nbd", 00:19:36.810 "config": [] 00:19:36.810 } 00:19:36.810 ] 00:19:36.810 }' 00:19:36.810 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:36.810 19:27:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:36.810 [2024-10-17 19:27:00.436383] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
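The -c /dev/fd/63 argument in the bdevperf invocation above is bash process substitution at work: target/tls.sh@206 echoes the JSON just shown into a pipe, and bdevperf reads it as if it were a config file, so key0 is loaded and the TLS controller attached before the benchmark starts (-z then makes it idle until the perform_tests RPC arrives later). A minimal sketch of the same launch pattern, assuming a trimmed config in $CONF (the real script feeds the full bdevperf config dumped above):

# sketch only: hand bdevperf a JSON config through a pipe instead of a file
CONF='{"subsystems":[{"subsystem":"keyring","config":[{"method":"keyring_file_add_key","params":{"name":"key0","path":"/tmp/tmp.5XHsGmfKEg"}}]}]}'
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$CONF") &
# the <(...) pipe shows up in the child as /dev/fd/63, matching the trace above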
00:19:36.810 [2024-10-17 19:27:00.436436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122460 ] 00:19:36.810 [2024-10-17 19:27:00.517114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.810 [2024-10-17 19:27:00.559320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.069 [2024-10-17 19:27:00.711231] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.637 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:37.637 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:37.637 19:27:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.637 Running I/O for 10 seconds... 00:19:39.954 5310.00 IOPS, 20.74 MiB/s [2024-10-17T17:27:04.675Z] 5403.00 IOPS, 21.11 MiB/s [2024-10-17T17:27:05.613Z] 5441.67 IOPS, 21.26 MiB/s [2024-10-17T17:27:06.552Z] 5465.75 IOPS, 21.35 MiB/s [2024-10-17T17:27:07.489Z] 5456.20 IOPS, 21.31 MiB/s [2024-10-17T17:27:08.427Z] 5464.33 IOPS, 21.35 MiB/s [2024-10-17T17:27:09.806Z] 5497.14 IOPS, 21.47 MiB/s [2024-10-17T17:27:10.745Z] 5508.88 IOPS, 21.52 MiB/s [2024-10-17T17:27:11.683Z] 5505.89 IOPS, 21.51 MiB/s [2024-10-17T17:27:11.683Z] 5519.00 IOPS, 21.56 MiB/s 00:19:47.899 Latency(us) 00:19:47.899 [2024-10-17T17:27:11.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.899 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.899 Verification LBA range: start 0x0 length 0x2000 00:19:47.899 TLSTESTn1 : 10.01 5524.50 21.58 0.00 0.00 23135.94 5367.71 56423.38 00:19:47.899 [2024-10-17T17:27:11.683Z] =================================================================================================================== 00:19:47.899 [2024-10-17T17:27:11.683Z] Total : 5524.50 21.58 0.00 0.00 23135.94 5367.71 56423.38 00:19:47.899 { 00:19:47.899 "results": [ 00:19:47.899 { 00:19:47.899 "job": "TLSTESTn1", 00:19:47.899 "core_mask": "0x4", 00:19:47.899 "workload": "verify", 00:19:47.899 "status": "finished", 00:19:47.899 "verify_range": { 00:19:47.899 "start": 0, 00:19:47.899 "length": 8192 00:19:47.899 }, 00:19:47.899 "queue_depth": 128, 00:19:47.899 "io_size": 4096, 00:19:47.899 "runtime": 10.013037, 00:19:47.899 "iops": 5524.4977123324325, 00:19:47.899 "mibps": 21.580069188798564, 00:19:47.899 "io_failed": 0, 00:19:47.899 "io_timeout": 0, 00:19:47.899 "avg_latency_us": 23135.943016570294, 00:19:47.899 "min_latency_us": 5367.710476190477, 00:19:47.899 "max_latency_us": 56423.375238095236 00:19:47.899 } 00:19:47.899 ], 00:19:47.899 "core_count": 1 00:19:47.899 } 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2122460 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2122460 ']' 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2122460 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2122460 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2122460' 00:19:47.899 killing process with pid 2122460 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2122460 00:19:47.899 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.899 00:19:47.899 Latency(us) 00:19:47.899 [2024-10-17T17:27:11.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.899 [2024-10-17T17:27:11.683Z] =================================================================================================================== 00:19:47.899 [2024-10-17T17:27:11.683Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2122460 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2122209 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2122209 ']' 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2122209 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.899 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2122209 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2122209' 00:19:48.158 killing process with pid 2122209 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2122209 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2122209 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2124787 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2124787 
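Both kills above go through the same autotest_common.sh killprocess() sequence that the xtrace lines spell out: confirm the pid is set and alive, resolve its comm name, refuse to treat a bare sudo wrapper as the target, then kill and reap. A paraphrase consistent with the trace (the real helper carries extra branches, e.g. for non-Linux hosts, omitted here):

# sketch: killprocess() as reconstructed from the xtrace lines above
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                     # bail if already gone
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1, reactor_2
        [ "$name" = sudo ] && return 1             # sketch: real helper handles sudo-wrapped children
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # reap so the exit status propagates
}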
00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2124787 ']' 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:48.158 19:27:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.158 [2024-10-17 19:27:11.922900] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:48.158 [2024-10-17 19:27:11.922951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.417 [2024-10-17 19:27:12.003938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.417 [2024-10-17 19:27:12.041574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.417 [2024-10-17 19:27:12.041611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.417 [2024-10-17 19:27:12.041618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.417 [2024-10-17 19:27:12.041623] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.417 [2024-10-17 19:27:12.041628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.417 [2024-10-17 19:27:12.042147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.986 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:48.986 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:48.986 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:48.986 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:48.986 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.244 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.244 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.5XHsGmfKEg 00:19:49.244 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5XHsGmfKEg 00:19:49.244 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.244 [2024-10-17 19:27:12.958445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.244 19:27:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.503 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.767 [2024-10-17 19:27:13.347438] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.767 [2024-10-17 19:27:13.347666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.767 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:50.027 malloc0 00:19:50.027 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:50.027 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:50.286 19:27:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2125269 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2125269 /var/tmp/bdevperf.sock 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2125269 ']' 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.546 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.546 [2024-10-17 19:27:14.136038] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:50.546 [2024-10-17 19:27:14.136085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125269 ] 00:19:50.546 [2024-10-17 19:27:14.206681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.546 [2024-10-17 19:27:14.246587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.807 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.807 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.807 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:50.807 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:51.066 [2024-10-17 19:27:14.705631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.066 nvme0n1 00:19:51.066 19:27:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.326 Running I/O for 1 seconds... 
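For reference while this verify pass runs: the TLS target it connects to was assembled by setup_nvmf_tgt() in the rpc.py calls traced at target/tls.sh@52-59 above. Condensed, with rpc standing in for the full workspace path to scripts/rpc.py used in the trace:

rpc=./scripts/rpc.py                               # assumed shorthand for the absolute path in the trace
$rpc nvmf_create_transport -t tcp -o               # TCP transport; the dumped config shows c2h_success: false
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0         # 32 MiB ramdisk, 4 KiB blocks (8192 blocks in the dump)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator side is then a single call against the bdevperf socket, exactly as traced at target/tls.sh@260:

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1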
00:19:52.266 5407.00 IOPS, 21.12 MiB/s 00:19:52.266 Latency(us) 00:19:52.266 [2024-10-17T17:27:16.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.266 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.266 Verification LBA range: start 0x0 length 0x2000 00:19:52.266 nvme0n1 : 1.01 5453.31 21.30 0.00 0.00 23290.24 5710.99 37948.46 00:19:52.266 [2024-10-17T17:27:16.050Z] =================================================================================================================== 00:19:52.266 [2024-10-17T17:27:16.050Z] Total : 5453.31 21.30 0.00 0.00 23290.24 5710.99 37948.46 00:19:52.266 { 00:19:52.266 "results": [ 00:19:52.266 { 00:19:52.266 "job": "nvme0n1", 00:19:52.266 "core_mask": "0x2", 00:19:52.266 "workload": "verify", 00:19:52.266 "status": "finished", 00:19:52.266 "verify_range": { 00:19:52.266 "start": 0, 00:19:52.266 "length": 8192 00:19:52.266 }, 00:19:52.266 "queue_depth": 128, 00:19:52.266 "io_size": 4096, 00:19:52.266 "runtime": 1.014979, 00:19:52.266 "iops": 5453.31479764606, 00:19:52.266 "mibps": 21.30201092830492, 00:19:52.266 "io_failed": 0, 00:19:52.266 "io_timeout": 0, 00:19:52.266 "avg_latency_us": 23290.23763788876, 00:19:52.266 "min_latency_us": 5710.994285714286, 00:19:52.266 "max_latency_us": 37948.46476190476 00:19:52.266 } 00:19:52.266 ], 00:19:52.266 "core_count": 1 00:19:52.266 } 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2125269 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2125269 ']' 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2125269 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2125269 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2125269' 00:19:52.266 killing process with pid 2125269 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2125269 00:19:52.266 Received shutdown signal, test time was about 1.000000 seconds 00:19:52.266 00:19:52.266 Latency(us) 00:19:52.266 [2024-10-17T17:27:16.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.266 [2024-10-17T17:27:16.050Z] =================================================================================================================== 00:19:52.266 [2024-10-17T17:27:16.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.266 19:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2125269 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2124787 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2124787 ']' 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2124787 00:19:52.526 19:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2124787 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2124787' 00:19:52.526 killing process with pid 2124787 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2124787 00:19:52.526 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2124787 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2125525 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2125525 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2125525 ']' 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.786 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.786 [2024-10-17 19:27:16.404689] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:52.786 [2024-10-17 19:27:16.404739] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.786 [2024-10-17 19:27:16.480569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.786 [2024-10-17 19:27:16.520991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.786 [2024-10-17 19:27:16.521028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:52.786 [2024-10-17 19:27:16.521036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.786 [2024-10-17 19:27:16.521042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.786 [2024-10-17 19:27:16.521047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.786 [2024-10-17 19:27:16.521626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.046 [2024-10-17 19:27:16.663862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.046 malloc0 00:19:53.046 [2024-10-17 19:27:16.691866] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.046 [2024-10-17 19:27:16.692080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2125679 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2125679 /var/tmp/bdevperf.sock 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2125679 ']' 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:53.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.046 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.046 [2024-10-17 19:27:16.768496] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:19:53.046 [2024-10-17 19:27:16.768538] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125679 ] 00:19:53.306 [2024-10-17 19:27:16.843802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.306 [2024-10-17 19:27:16.885784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.306 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.306 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.306 19:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5XHsGmfKEg 00:19:53.567 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:53.567 [2024-10-17 19:27:17.341443] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.827 nvme0n1 00:19:53.827 19:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:53.827 Running I/O for 1 seconds... 00:19:54.766 5397.00 IOPS, 21.08 MiB/s 00:19:54.766 Latency(us) 00:19:54.766 [2024-10-17T17:27:18.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.766 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:54.766 Verification LBA range: start 0x0 length 0x2000 00:19:54.766 nvme0n1 : 1.01 5453.60 21.30 0.00 0.00 23313.63 5211.67 18974.23 00:19:54.766 [2024-10-17T17:27:18.550Z] =================================================================================================================== 00:19:54.766 [2024-10-17T17:27:18.550Z] Total : 5453.60 21.30 0.00 0.00 23313.63 5211.67 18974.23 00:19:54.766 { 00:19:54.766 "results": [ 00:19:54.766 { 00:19:54.766 "job": "nvme0n1", 00:19:54.766 "core_mask": "0x2", 00:19:54.766 "workload": "verify", 00:19:54.766 "status": "finished", 00:19:54.766 "verify_range": { 00:19:54.766 "start": 0, 00:19:54.766 "length": 8192 00:19:54.766 }, 00:19:54.766 "queue_depth": 128, 00:19:54.766 "io_size": 4096, 00:19:54.766 "runtime": 1.013092, 00:19:54.766 "iops": 5453.60144981897, 00:19:54.766 "mibps": 21.30313066335535, 00:19:54.766 "io_failed": 0, 00:19:54.766 "io_timeout": 0, 00:19:54.766 "avg_latency_us": 23313.63345210084, 00:19:54.766 "min_latency_us": 5211.672380952381, 00:19:54.766 "max_latency_us": 18974.23238095238 00:19:54.766 } 00:19:54.766 ], 00:19:54.766 "core_count": 1 00:19:54.766 } 00:19:55.027 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:55.027 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.027 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.027 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.027 19:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:55.027 "subsystems": [ 00:19:55.027 { 00:19:55.027 "subsystem": "keyring", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "keyring_file_add_key", 00:19:55.027 "params": { 00:19:55.027 "name": "key0", 00:19:55.027 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:55.027 } 00:19:55.027 } 00:19:55.027 ] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "iobuf", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "iobuf_set_options", 00:19:55.027 "params": { 00:19:55.027 "small_pool_count": 8192, 00:19:55.027 "large_pool_count": 1024, 00:19:55.027 "small_bufsize": 8192, 00:19:55.027 "large_bufsize": 135168, 00:19:55.027 "enable_numa": false 00:19:55.027 } 00:19:55.027 } 00:19:55.027 ] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "sock", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "sock_set_default_impl", 00:19:55.027 "params": { 00:19:55.027 "impl_name": "posix" 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "sock_impl_set_options", 00:19:55.027 "params": { 00:19:55.027 "impl_name": "ssl", 00:19:55.027 "recv_buf_size": 4096, 00:19:55.027 "send_buf_size": 4096, 00:19:55.027 "enable_recv_pipe": true, 00:19:55.027 "enable_quickack": false, 00:19:55.027 "enable_placement_id": 0, 00:19:55.027 "enable_zerocopy_send_server": true, 00:19:55.027 "enable_zerocopy_send_client": false, 00:19:55.027 "zerocopy_threshold": 0, 00:19:55.027 "tls_version": 0, 00:19:55.027 "enable_ktls": false 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "sock_impl_set_options", 00:19:55.027 "params": { 00:19:55.027 "impl_name": "posix", 00:19:55.027 "recv_buf_size": 2097152, 00:19:55.027 "send_buf_size": 2097152, 00:19:55.027 "enable_recv_pipe": true, 00:19:55.027 "enable_quickack": false, 00:19:55.027 "enable_placement_id": 0, 00:19:55.027 "enable_zerocopy_send_server": true, 00:19:55.027 "enable_zerocopy_send_client": false, 00:19:55.027 "zerocopy_threshold": 0, 00:19:55.027 "tls_version": 0, 00:19:55.027 "enable_ktls": false 00:19:55.027 } 00:19:55.027 } 00:19:55.027 ] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "vmd", 00:19:55.027 "config": [] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "accel", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "accel_set_options", 00:19:55.027 "params": { 00:19:55.027 "small_cache_size": 128, 00:19:55.027 "large_cache_size": 16, 00:19:55.027 "task_count": 2048, 00:19:55.027 "sequence_count": 2048, 00:19:55.027 "buf_count": 2048 00:19:55.027 } 00:19:55.027 } 00:19:55.027 ] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "bdev", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "bdev_set_options", 00:19:55.027 "params": { 00:19:55.027 "bdev_io_pool_size": 65535, 00:19:55.027 "bdev_io_cache_size": 256, 00:19:55.027 "bdev_auto_examine": true, 00:19:55.027 "iobuf_small_cache_size": 128, 00:19:55.027 "iobuf_large_cache_size": 16 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "bdev_raid_set_options", 00:19:55.027 "params": { 00:19:55.027 "process_window_size_kb": 1024, 00:19:55.027 "process_max_bandwidth_mb_sec": 0 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "bdev_iscsi_set_options", 00:19:55.027 "params": { 00:19:55.027 "timeout_sec": 30 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "bdev_nvme_set_options", 00:19:55.027 "params": { 00:19:55.027 "action_on_timeout": "none", 00:19:55.027 
"timeout_us": 0, 00:19:55.027 "timeout_admin_us": 0, 00:19:55.027 "keep_alive_timeout_ms": 10000, 00:19:55.027 "arbitration_burst": 0, 00:19:55.027 "low_priority_weight": 0, 00:19:55.027 "medium_priority_weight": 0, 00:19:55.027 "high_priority_weight": 0, 00:19:55.027 "nvme_adminq_poll_period_us": 10000, 00:19:55.027 "nvme_ioq_poll_period_us": 0, 00:19:55.027 "io_queue_requests": 0, 00:19:55.027 "delay_cmd_submit": true, 00:19:55.027 "transport_retry_count": 4, 00:19:55.027 "bdev_retry_count": 3, 00:19:55.027 "transport_ack_timeout": 0, 00:19:55.027 "ctrlr_loss_timeout_sec": 0, 00:19:55.027 "reconnect_delay_sec": 0, 00:19:55.027 "fast_io_fail_timeout_sec": 0, 00:19:55.027 "disable_auto_failback": false, 00:19:55.027 "generate_uuids": false, 00:19:55.027 "transport_tos": 0, 00:19:55.027 "nvme_error_stat": false, 00:19:55.027 "rdma_srq_size": 0, 00:19:55.027 "io_path_stat": false, 00:19:55.027 "allow_accel_sequence": false, 00:19:55.027 "rdma_max_cq_size": 0, 00:19:55.027 "rdma_cm_event_timeout_ms": 0, 00:19:55.027 "dhchap_digests": [ 00:19:55.027 "sha256", 00:19:55.027 "sha384", 00:19:55.027 "sha512" 00:19:55.027 ], 00:19:55.027 "dhchap_dhgroups": [ 00:19:55.027 "null", 00:19:55.027 "ffdhe2048", 00:19:55.027 "ffdhe3072", 00:19:55.027 "ffdhe4096", 00:19:55.027 "ffdhe6144", 00:19:55.027 "ffdhe8192" 00:19:55.027 ] 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "bdev_nvme_set_hotplug", 00:19:55.027 "params": { 00:19:55.027 "period_us": 100000, 00:19:55.027 "enable": false 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "bdev_malloc_create", 00:19:55.027 "params": { 00:19:55.027 "name": "malloc0", 00:19:55.027 "num_blocks": 8192, 00:19:55.027 "block_size": 4096, 00:19:55.027 "physical_block_size": 4096, 00:19:55.027 "uuid": "46d2d1df-55fe-4114-8e4f-4314ba244ca8", 00:19:55.027 "optimal_io_boundary": 0, 00:19:55.027 "md_size": 0, 00:19:55.027 "dif_type": 0, 00:19:55.027 "dif_is_head_of_md": false, 00:19:55.027 "dif_pi_format": 0 00:19:55.027 } 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "method": "bdev_wait_for_examine" 00:19:55.027 } 00:19:55.027 ] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "nbd", 00:19:55.027 "config": [] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "scheduler", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "framework_set_scheduler", 00:19:55.027 "params": { 00:19:55.027 "name": "static" 00:19:55.027 } 00:19:55.027 } 00:19:55.027 ] 00:19:55.027 }, 00:19:55.027 { 00:19:55.027 "subsystem": "nvmf", 00:19:55.027 "config": [ 00:19:55.027 { 00:19:55.027 "method": "nvmf_set_config", 00:19:55.027 "params": { 00:19:55.027 "discovery_filter": "match_any", 00:19:55.027 "admin_cmd_passthru": { 00:19:55.027 "identify_ctrlr": false 00:19:55.027 }, 00:19:55.027 "dhchap_digests": [ 00:19:55.027 "sha256", 00:19:55.028 "sha384", 00:19:55.028 "sha512" 00:19:55.028 ], 00:19:55.028 "dhchap_dhgroups": [ 00:19:55.028 "null", 00:19:55.028 "ffdhe2048", 00:19:55.028 "ffdhe3072", 00:19:55.028 "ffdhe4096", 00:19:55.028 "ffdhe6144", 00:19:55.028 "ffdhe8192" 00:19:55.028 ] 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_set_max_subsystems", 00:19:55.028 "params": { 00:19:55.028 "max_subsystems": 1024 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_set_crdt", 00:19:55.028 "params": { 00:19:55.028 "crdt1": 0, 00:19:55.028 "crdt2": 0, 00:19:55.028 "crdt3": 0 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_create_transport", 00:19:55.028 "params": 
{ 00:19:55.028 "trtype": "TCP", 00:19:55.028 "max_queue_depth": 128, 00:19:55.028 "max_io_qpairs_per_ctrlr": 127, 00:19:55.028 "in_capsule_data_size": 4096, 00:19:55.028 "max_io_size": 131072, 00:19:55.028 "io_unit_size": 131072, 00:19:55.028 "max_aq_depth": 128, 00:19:55.028 "num_shared_buffers": 511, 00:19:55.028 "buf_cache_size": 4294967295, 00:19:55.028 "dif_insert_or_strip": false, 00:19:55.028 "zcopy": false, 00:19:55.028 "c2h_success": false, 00:19:55.028 "sock_priority": 0, 00:19:55.028 "abort_timeout_sec": 1, 00:19:55.028 "ack_timeout": 0, 00:19:55.028 "data_wr_pool_size": 0 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_create_subsystem", 00:19:55.028 "params": { 00:19:55.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.028 "allow_any_host": false, 00:19:55.028 "serial_number": "00000000000000000000", 00:19:55.028 "model_number": "SPDK bdev Controller", 00:19:55.028 "max_namespaces": 32, 00:19:55.028 "min_cntlid": 1, 00:19:55.028 "max_cntlid": 65519, 00:19:55.028 "ana_reporting": false 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_subsystem_add_host", 00:19:55.028 "params": { 00:19:55.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.028 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.028 "psk": "key0" 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_subsystem_add_ns", 00:19:55.028 "params": { 00:19:55.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.028 "namespace": { 00:19:55.028 "nsid": 1, 00:19:55.028 "bdev_name": "malloc0", 00:19:55.028 "nguid": "46D2D1DF55FE41148E4F4314BA244CA8", 00:19:55.028 "uuid": "46d2d1df-55fe-4114-8e4f-4314ba244ca8", 00:19:55.028 "no_auto_visible": false 00:19:55.028 } 00:19:55.028 } 00:19:55.028 }, 00:19:55.028 { 00:19:55.028 "method": "nvmf_subsystem_add_listener", 00:19:55.028 "params": { 00:19:55.028 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.028 "listen_address": { 00:19:55.028 "trtype": "TCP", 00:19:55.028 "adrfam": "IPv4", 00:19:55.028 "traddr": "10.0.0.2", 00:19:55.028 "trsvcid": "4420" 00:19:55.028 }, 00:19:55.028 "secure_channel": false, 00:19:55.028 "sock_impl": "ssl" 00:19:55.028 } 00:19:55.028 } 00:19:55.028 ] 00:19:55.028 } 00:19:55.028 ] 00:19:55.028 }' 00:19:55.028 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:55.288 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:55.288 "subsystems": [ 00:19:55.288 { 00:19:55.288 "subsystem": "keyring", 00:19:55.288 "config": [ 00:19:55.288 { 00:19:55.288 "method": "keyring_file_add_key", 00:19:55.288 "params": { 00:19:55.288 "name": "key0", 00:19:55.288 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:55.288 } 00:19:55.288 } 00:19:55.288 ] 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "subsystem": "iobuf", 00:19:55.288 "config": [ 00:19:55.288 { 00:19:55.288 "method": "iobuf_set_options", 00:19:55.288 "params": { 00:19:55.288 "small_pool_count": 8192, 00:19:55.288 "large_pool_count": 1024, 00:19:55.288 "small_bufsize": 8192, 00:19:55.288 "large_bufsize": 135168, 00:19:55.288 "enable_numa": false 00:19:55.288 } 00:19:55.288 } 00:19:55.288 ] 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "subsystem": "sock", 00:19:55.288 "config": [ 00:19:55.288 { 00:19:55.288 "method": "sock_set_default_impl", 00:19:55.288 "params": { 00:19:55.288 "impl_name": "posix" 00:19:55.288 } 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "method": "sock_impl_set_options", 00:19:55.288 
"params": { 00:19:55.288 "impl_name": "ssl", 00:19:55.288 "recv_buf_size": 4096, 00:19:55.288 "send_buf_size": 4096, 00:19:55.288 "enable_recv_pipe": true, 00:19:55.288 "enable_quickack": false, 00:19:55.288 "enable_placement_id": 0, 00:19:55.288 "enable_zerocopy_send_server": true, 00:19:55.288 "enable_zerocopy_send_client": false, 00:19:55.288 "zerocopy_threshold": 0, 00:19:55.288 "tls_version": 0, 00:19:55.288 "enable_ktls": false 00:19:55.288 } 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "method": "sock_impl_set_options", 00:19:55.288 "params": { 00:19:55.288 "impl_name": "posix", 00:19:55.288 "recv_buf_size": 2097152, 00:19:55.288 "send_buf_size": 2097152, 00:19:55.288 "enable_recv_pipe": true, 00:19:55.288 "enable_quickack": false, 00:19:55.288 "enable_placement_id": 0, 00:19:55.288 "enable_zerocopy_send_server": true, 00:19:55.288 "enable_zerocopy_send_client": false, 00:19:55.288 "zerocopy_threshold": 0, 00:19:55.288 "tls_version": 0, 00:19:55.288 "enable_ktls": false 00:19:55.288 } 00:19:55.288 } 00:19:55.288 ] 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "subsystem": "vmd", 00:19:55.288 "config": [] 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "subsystem": "accel", 00:19:55.288 "config": [ 00:19:55.288 { 00:19:55.288 "method": "accel_set_options", 00:19:55.288 "params": { 00:19:55.288 "small_cache_size": 128, 00:19:55.288 "large_cache_size": 16, 00:19:55.288 "task_count": 2048, 00:19:55.288 "sequence_count": 2048, 00:19:55.288 "buf_count": 2048 00:19:55.288 } 00:19:55.288 } 00:19:55.288 ] 00:19:55.288 }, 00:19:55.288 { 00:19:55.288 "subsystem": "bdev", 00:19:55.288 "config": [ 00:19:55.289 { 00:19:55.289 "method": "bdev_set_options", 00:19:55.289 "params": { 00:19:55.289 "bdev_io_pool_size": 65535, 00:19:55.289 "bdev_io_cache_size": 256, 00:19:55.289 "bdev_auto_examine": true, 00:19:55.289 "iobuf_small_cache_size": 128, 00:19:55.289 "iobuf_large_cache_size": 16 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_raid_set_options", 00:19:55.289 "params": { 00:19:55.289 "process_window_size_kb": 1024, 00:19:55.289 "process_max_bandwidth_mb_sec": 0 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_iscsi_set_options", 00:19:55.289 "params": { 00:19:55.289 "timeout_sec": 30 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_nvme_set_options", 00:19:55.289 "params": { 00:19:55.289 "action_on_timeout": "none", 00:19:55.289 "timeout_us": 0, 00:19:55.289 "timeout_admin_us": 0, 00:19:55.289 "keep_alive_timeout_ms": 10000, 00:19:55.289 "arbitration_burst": 0, 00:19:55.289 "low_priority_weight": 0, 00:19:55.289 "medium_priority_weight": 0, 00:19:55.289 "high_priority_weight": 0, 00:19:55.289 "nvme_adminq_poll_period_us": 10000, 00:19:55.289 "nvme_ioq_poll_period_us": 0, 00:19:55.289 "io_queue_requests": 512, 00:19:55.289 "delay_cmd_submit": true, 00:19:55.289 "transport_retry_count": 4, 00:19:55.289 "bdev_retry_count": 3, 00:19:55.289 "transport_ack_timeout": 0, 00:19:55.289 "ctrlr_loss_timeout_sec": 0, 00:19:55.289 "reconnect_delay_sec": 0, 00:19:55.289 "fast_io_fail_timeout_sec": 0, 00:19:55.289 "disable_auto_failback": false, 00:19:55.289 "generate_uuids": false, 00:19:55.289 "transport_tos": 0, 00:19:55.289 "nvme_error_stat": false, 00:19:55.289 "rdma_srq_size": 0, 00:19:55.289 "io_path_stat": false, 00:19:55.289 "allow_accel_sequence": false, 00:19:55.289 "rdma_max_cq_size": 0, 00:19:55.289 "rdma_cm_event_timeout_ms": 0, 00:19:55.289 "dhchap_digests": [ 00:19:55.289 "sha256", 00:19:55.289 "sha384", 00:19:55.289 
"sha512" 00:19:55.289 ], 00:19:55.289 "dhchap_dhgroups": [ 00:19:55.289 "null", 00:19:55.289 "ffdhe2048", 00:19:55.289 "ffdhe3072", 00:19:55.289 "ffdhe4096", 00:19:55.289 "ffdhe6144", 00:19:55.289 "ffdhe8192" 00:19:55.289 ] 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_nvme_attach_controller", 00:19:55.289 "params": { 00:19:55.289 "name": "nvme0", 00:19:55.289 "trtype": "TCP", 00:19:55.289 "adrfam": "IPv4", 00:19:55.289 "traddr": "10.0.0.2", 00:19:55.289 "trsvcid": "4420", 00:19:55.289 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.289 "prchk_reftag": false, 00:19:55.289 "prchk_guard": false, 00:19:55.289 "ctrlr_loss_timeout_sec": 0, 00:19:55.289 "reconnect_delay_sec": 0, 00:19:55.289 "fast_io_fail_timeout_sec": 0, 00:19:55.289 "psk": "key0", 00:19:55.289 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.289 "hdgst": false, 00:19:55.289 "ddgst": false, 00:19:55.289 "multipath": "multipath" 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_nvme_set_hotplug", 00:19:55.289 "params": { 00:19:55.289 "period_us": 100000, 00:19:55.289 "enable": false 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_enable_histogram", 00:19:55.289 "params": { 00:19:55.289 "name": "nvme0n1", 00:19:55.289 "enable": true 00:19:55.289 } 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "method": "bdev_wait_for_examine" 00:19:55.289 } 00:19:55.289 ] 00:19:55.289 }, 00:19:55.289 { 00:19:55.289 "subsystem": "nbd", 00:19:55.289 "config": [] 00:19:55.289 } 00:19:55.289 ] 00:19:55.289 }' 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2125679 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2125679 ']' 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2125679 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2125679 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2125679' 00:19:55.289 killing process with pid 2125679 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2125679 00:19:55.289 Received shutdown signal, test time was about 1.000000 seconds 00:19:55.289 00:19:55.289 Latency(us) 00:19:55.289 [2024-10-17T17:27:19.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:55.289 [2024-10-17T17:27:19.073Z] =================================================================================================================== 00:19:55.289 [2024-10-17T17:27:19.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:55.289 19:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2125679 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2125525 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2125525 
']' 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2125525 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2125525 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2125525' 00:19:55.549 killing process with pid 2125525 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2125525 00:19:55.549 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2125525 00:19:55.810 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:55.810 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:55.810 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:55.810 "subsystems": [ 00:19:55.810 { 00:19:55.810 "subsystem": "keyring", 00:19:55.810 "config": [ 00:19:55.810 { 00:19:55.810 "method": "keyring_file_add_key", 00:19:55.810 "params": { 00:19:55.810 "name": "key0", 00:19:55.810 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:55.810 } 00:19:55.810 } 00:19:55.810 ] 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "subsystem": "iobuf", 00:19:55.810 "config": [ 00:19:55.810 { 00:19:55.810 "method": "iobuf_set_options", 00:19:55.810 "params": { 00:19:55.810 "small_pool_count": 8192, 00:19:55.810 "large_pool_count": 1024, 00:19:55.810 "small_bufsize": 8192, 00:19:55.810 "large_bufsize": 135168, 00:19:55.810 "enable_numa": false 00:19:55.810 } 00:19:55.810 } 00:19:55.810 ] 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "subsystem": "sock", 00:19:55.810 "config": [ 00:19:55.810 { 00:19:55.810 "method": "sock_set_default_impl", 00:19:55.810 "params": { 00:19:55.810 "impl_name": "posix" 00:19:55.810 } 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "method": "sock_impl_set_options", 00:19:55.810 "params": { 00:19:55.810 "impl_name": "ssl", 00:19:55.810 "recv_buf_size": 4096, 00:19:55.810 "send_buf_size": 4096, 00:19:55.810 "enable_recv_pipe": true, 00:19:55.810 "enable_quickack": false, 00:19:55.810 "enable_placement_id": 0, 00:19:55.810 "enable_zerocopy_send_server": true, 00:19:55.810 "enable_zerocopy_send_client": false, 00:19:55.810 "zerocopy_threshold": 0, 00:19:55.810 "tls_version": 0, 00:19:55.810 "enable_ktls": false 00:19:55.810 } 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "method": "sock_impl_set_options", 00:19:55.810 "params": { 00:19:55.810 "impl_name": "posix", 00:19:55.810 "recv_buf_size": 2097152, 00:19:55.810 "send_buf_size": 2097152, 00:19:55.810 "enable_recv_pipe": true, 00:19:55.810 "enable_quickack": false, 00:19:55.810 "enable_placement_id": 0, 00:19:55.810 "enable_zerocopy_send_server": true, 00:19:55.810 "enable_zerocopy_send_client": false, 00:19:55.810 "zerocopy_threshold": 0, 00:19:55.810 "tls_version": 0, 00:19:55.810 "enable_ktls": false 00:19:55.810 } 00:19:55.810 } 00:19:55.810 ] 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "subsystem": 
"vmd", 00:19:55.810 "config": [] 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "subsystem": "accel", 00:19:55.810 "config": [ 00:19:55.810 { 00:19:55.810 "method": "accel_set_options", 00:19:55.810 "params": { 00:19:55.810 "small_cache_size": 128, 00:19:55.810 "large_cache_size": 16, 00:19:55.810 "task_count": 2048, 00:19:55.810 "sequence_count": 2048, 00:19:55.810 "buf_count": 2048 00:19:55.810 } 00:19:55.810 } 00:19:55.810 ] 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "subsystem": "bdev", 00:19:55.810 "config": [ 00:19:55.810 { 00:19:55.810 "method": "bdev_set_options", 00:19:55.810 "params": { 00:19:55.810 "bdev_io_pool_size": 65535, 00:19:55.810 "bdev_io_cache_size": 256, 00:19:55.810 "bdev_auto_examine": true, 00:19:55.810 "iobuf_small_cache_size": 128, 00:19:55.810 "iobuf_large_cache_size": 16 00:19:55.810 } 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "method": "bdev_raid_set_options", 00:19:55.810 "params": { 00:19:55.810 "process_window_size_kb": 1024, 00:19:55.810 "process_max_bandwidth_mb_sec": 0 00:19:55.810 } 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "method": "bdev_iscsi_set_options", 00:19:55.810 "params": { 00:19:55.810 "timeout_sec": 30 00:19:55.810 } 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "method": "bdev_nvme_set_options", 00:19:55.810 "params": { 00:19:55.810 "action_on_timeout": "none", 00:19:55.810 "timeout_us": 0, 00:19:55.810 "timeout_admin_us": 0, 00:19:55.810 "keep_alive_timeout_ms": 10000, 00:19:55.810 "arbitration_burst": 0, 00:19:55.810 "low_priority_weight": 0, 00:19:55.810 "medium_priority_weight": 0, 00:19:55.810 "high_priority_weight": 0, 00:19:55.810 "nvme_adminq_poll_period_us": 10000, 00:19:55.810 "nvme_ioq_poll_period_us": 0, 00:19:55.810 "io_queue_requests": 0, 00:19:55.810 "delay_cmd_submit": true, 00:19:55.810 "transport_retry_count": 4, 00:19:55.810 "bdev_retry_count": 3, 00:19:55.810 "transport_ack_timeout": 0, 00:19:55.810 "ctrlr_loss_timeout_sec": 0, 00:19:55.810 "reconnect_delay_sec": 0, 00:19:55.810 "fast_io_fail_timeout_sec": 0, 00:19:55.810 "disable_auto_failback": false, 00:19:55.810 "generate_uuids": false, 00:19:55.810 "transport_tos": 0, 00:19:55.810 "nvme_error_stat": false, 00:19:55.810 "rdma_srq_size": 0, 00:19:55.810 "io_path_stat": false, 00:19:55.810 "allow_accel_sequence": false, 00:19:55.810 "rdma_max_cq_size": 0, 00:19:55.810 "rdma_cm_event_timeout_ms": 0, 00:19:55.810 "dhchap_digests": [ 00:19:55.810 "sha256", 00:19:55.810 "sha384", 00:19:55.810 "sha512" 00:19:55.810 ], 00:19:55.810 "dhchap_dhgroups": [ 00:19:55.810 "null", 00:19:55.810 "ffdhe2048", 00:19:55.810 "ffdhe3072", 00:19:55.810 "ffdhe4096", 00:19:55.810 "ffdhe6144", 00:19:55.810 "ffdhe8192" 00:19:55.810 ] 00:19:55.810 } 00:19:55.810 }, 00:19:55.810 { 00:19:55.810 "method": "bdev_nvme_set_hotplug", 00:19:55.811 "params": { 00:19:55.811 "period_us": 100000, 00:19:55.811 "enable": false 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "bdev_malloc_create", 00:19:55.811 "params": { 00:19:55.811 "name": "malloc0", 00:19:55.811 "num_blocks": 8192, 00:19:55.811 "block_size": 4096, 00:19:55.811 "physical_block_size": 4096, 00:19:55.811 "uuid": "46d2d1df-55fe-4114-8e4f-4314ba244ca8", 00:19:55.811 "optimal_io_boundary": 0, 00:19:55.811 "md_size": 0, 00:19:55.811 "dif_type": 0, 00:19:55.811 "dif_is_head_of_md": false, 00:19:55.811 "dif_pi_format": 0 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "bdev_wait_for_examine" 00:19:55.811 } 00:19:55.811 ] 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "subsystem": "nbd", 00:19:55.811 "config": 
[] 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "subsystem": "scheduler", 00:19:55.811 "config": [ 00:19:55.811 { 00:19:55.811 "method": "framework_set_scheduler", 00:19:55.811 "params": { 00:19:55.811 "name": "static" 00:19:55.811 } 00:19:55.811 } 00:19:55.811 ] 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "subsystem": "nvmf", 00:19:55.811 "config": [ 00:19:55.811 { 00:19:55.811 "method": "nvmf_set_config", 00:19:55.811 "params": { 00:19:55.811 "discovery_filter": "match_any", 00:19:55.811 "admin_cmd_passthru": { 00:19:55.811 "identify_ctrlr": false 00:19:55.811 }, 00:19:55.811 "dhchap_digests": [ 00:19:55.811 "sha256", 00:19:55.811 "sha384", 00:19:55.811 "sha512" 00:19:55.811 ], 00:19:55.811 "dhchap_dhgroups": [ 00:19:55.811 "null", 00:19:55.811 "ffdhe2048", 00:19:55.811 "ffdhe3072", 00:19:55.811 "ffdhe4096", 00:19:55.811 "ffdhe6144", 00:19:55.811 "ffdhe8192" 00:19:55.811 ] 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_set_max_subsystems", 00:19:55.811 "params": { 00:19:55.811 "max_subsystems": 1024 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_set_crdt", 00:19:55.811 "params": { 00:19:55.811 "crdt1": 0, 00:19:55.811 "crdt2": 0, 00:19:55.811 "crdt3": 0 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_create_transport", 00:19:55.811 "params": { 00:19:55.811 "trtype": "TCP", 00:19:55.811 "max_queue_depth": 128, 00:19:55.811 "max_io_qpairs_per_ctrlr": 127, 00:19:55.811 "in_capsule_data_size": 4096, 00:19:55.811 "max_io_size": 131072, 00:19:55.811 "io_unit_size": 131072, 00:19:55.811 "max_aq_depth": 128, 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:55.811 "num_shared_buffers": 511, 00:19:55.811 "buf_cache_size": 4294967295, 00:19:55.811 "dif_insert_or_strip": false, 00:19:55.811 "zcopy": false, 00:19:55.811 "c2h_success": false, 00:19:55.811 "sock_priority": 0, 00:19:55.811 "abort_timeout_sec": 1, 00:19:55.811 "ack_timeout": 0, 00:19:55.811 "data_wr_pool_size": 0 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_create_subsystem", 00:19:55.811 "params": { 00:19:55.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.811 "allow_any_host": false, 00:19:55.811 "serial_number": "00000000000000000000", 00:19:55.811 "model_number": "SPDK bdev Controller", 00:19:55.811 "max_namespaces": 32, 00:19:55.811 "min_cntlid": 1, 00:19:55.811 "max_cntlid": 65519, 00:19:55.811 "ana_reporting": false 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_subsystem_add_host", 00:19:55.811 "params": { 00:19:55.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.811 "host": "nqn.2016-06.io.spdk:host1", 00:19:55.811 "psk": "key0" 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_subsystem_add_ns", 00:19:55.811 "params": { 00:19:55.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.811 "namespace": { 00:19:55.811 "nsid": 1, 00:19:55.811 "bdev_name": "malloc0", 00:19:55.811 "nguid": "46D2D1DF55FE41148E4F4314BA244CA8", 00:19:55.811 "uuid": "46d2d1df-55fe-4114-8e4f-4314ba244ca8", 00:19:55.811 "no_auto_visible": false 00:19:55.811 } 00:19:55.811 } 00:19:55.811 }, 00:19:55.811 { 00:19:55.811 "method": "nvmf_subsystem_add_listener", 00:19:55.811 "params": { 00:19:55.811 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.811 "listen_address": { 00:19:55.811 "trtype": "TCP", 00:19:55.811 "adrfam": "IPv4", 00:19:55.811 "traddr": "10.0.0.2", 00:19:55.811 "trsvcid": "4420" 00:19:55.811 }, 00:19:55.811 
"secure_channel": false, 00:19:55.811 "sock_impl": "ssl" 00:19:55.811 } 00:19:55.811 } 00:19:55.811 ] 00:19:55.811 } 00:19:55.811 ] 00:19:55.811 }' 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=2126061 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 2126061 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2126061 ']' 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:55.811 19:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.811 [2024-10-17 19:27:19.429983] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:19:55.811 [2024-10-17 19:27:19.430032] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.811 [2024-10-17 19:27:19.511225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.811 [2024-10-17 19:27:19.548929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.811 [2024-10-17 19:27:19.548964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.811 [2024-10-17 19:27:19.548972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.811 [2024-10-17 19:27:19.548978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.811 [2024-10-17 19:27:19.548982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.811 [2024-10-17 19:27:19.549568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.071 [2024-10-17 19:27:19.762353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.071 [2024-10-17 19:27:19.794383] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:56.071 [2024-10-17 19:27:19.794615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2126263 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2126263 /var/tmp/bdevperf.sock 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2126263 ']' 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:56.640 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:56.640 "subsystems": [ 00:19:56.640 { 00:19:56.640 "subsystem": "keyring", 00:19:56.640 "config": [ 00:19:56.640 { 00:19:56.640 "method": "keyring_file_add_key", 00:19:56.640 "params": { 00:19:56.640 "name": "key0", 00:19:56.640 "path": "/tmp/tmp.5XHsGmfKEg" 00:19:56.640 } 00:19:56.640 } 00:19:56.640 ] 00:19:56.640 }, 00:19:56.640 { 00:19:56.640 "subsystem": "iobuf", 00:19:56.640 "config": [ 00:19:56.640 { 00:19:56.640 "method": "iobuf_set_options", 00:19:56.640 "params": { 00:19:56.640 "small_pool_count": 8192, 00:19:56.640 "large_pool_count": 1024, 00:19:56.640 "small_bufsize": 8192, 00:19:56.640 "large_bufsize": 135168, 00:19:56.640 "enable_numa": false 00:19:56.640 } 00:19:56.640 } 00:19:56.640 ] 00:19:56.640 }, 00:19:56.640 { 00:19:56.640 "subsystem": "sock", 00:19:56.640 "config": [ 00:19:56.640 { 00:19:56.640 "method": "sock_set_default_impl", 00:19:56.640 "params": { 00:19:56.640 "impl_name": "posix" 00:19:56.640 } 00:19:56.640 }, 00:19:56.640 { 00:19:56.640 "method": "sock_impl_set_options", 00:19:56.640 "params": { 00:19:56.640 "impl_name": "ssl", 00:19:56.640 "recv_buf_size": 4096, 00:19:56.640 "send_buf_size": 4096, 00:19:56.640 "enable_recv_pipe": true, 00:19:56.640 "enable_quickack": false, 00:19:56.640 "enable_placement_id": 0, 00:19:56.640 "enable_zerocopy_send_server": true, 00:19:56.640 "enable_zerocopy_send_client": false, 00:19:56.640 "zerocopy_threshold": 0, 00:19:56.640 "tls_version": 0, 00:19:56.640 "enable_ktls": false 00:19:56.640 } 00:19:56.640 }, 00:19:56.640 { 00:19:56.640 "method": "sock_impl_set_options", 00:19:56.640 "params": { 00:19:56.640 "impl_name": "posix", 00:19:56.640 "recv_buf_size": 2097152, 00:19:56.640 "send_buf_size": 2097152, 00:19:56.640 "enable_recv_pipe": true, 00:19:56.640 "enable_quickack": false, 00:19:56.640 "enable_placement_id": 0, 00:19:56.640 "enable_zerocopy_send_server": true, 00:19:56.640 "enable_zerocopy_send_client": false, 00:19:56.640 "zerocopy_threshold": 0, 00:19:56.640 "tls_version": 0, 00:19:56.640 "enable_ktls": false 00:19:56.640 } 00:19:56.640 } 00:19:56.641 ] 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "subsystem": "vmd", 00:19:56.641 "config": [] 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "subsystem": "accel", 00:19:56.641 "config": [ 00:19:56.641 { 00:19:56.641 "method": "accel_set_options", 00:19:56.641 "params": { 00:19:56.641 "small_cache_size": 128, 00:19:56.641 "large_cache_size": 16, 00:19:56.641 "task_count": 2048, 00:19:56.641 "sequence_count": 2048, 00:19:56.641 "buf_count": 2048 00:19:56.641 } 00:19:56.641 } 00:19:56.641 ] 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "subsystem": "bdev", 00:19:56.641 "config": [ 00:19:56.641 { 00:19:56.641 "method": "bdev_set_options", 00:19:56.641 "params": { 00:19:56.641 "bdev_io_pool_size": 65535, 00:19:56.641 "bdev_io_cache_size": 256, 00:19:56.641 "bdev_auto_examine": true, 00:19:56.641 "iobuf_small_cache_size": 128, 00:19:56.641 "iobuf_large_cache_size": 16 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_raid_set_options", 00:19:56.641 "params": { 00:19:56.641 "process_window_size_kb": 1024, 00:19:56.641 "process_max_bandwidth_mb_sec": 0 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_iscsi_set_options", 00:19:56.641 "params": { 00:19:56.641 "timeout_sec": 30 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_nvme_set_options", 00:19:56.641 "params": { 00:19:56.641 "action_on_timeout": "none", 
00:19:56.641 "timeout_us": 0, 00:19:56.641 "timeout_admin_us": 0, 00:19:56.641 "keep_alive_timeout_ms": 10000, 00:19:56.641 "arbitration_burst": 0, 00:19:56.641 "low_priority_weight": 0, 00:19:56.641 "medium_priority_weight": 0, 00:19:56.641 "high_priority_weight": 0, 00:19:56.641 "nvme_adminq_poll_period_us": 10000, 00:19:56.641 "nvme_ioq_poll_period_us": 0, 00:19:56.641 "io_queue_requests": 512, 00:19:56.641 "delay_cmd_submit": true, 00:19:56.641 "transport_retry_count": 4, 00:19:56.641 "bdev_retry_count": 3, 00:19:56.641 "transport_ack_timeout": 0, 00:19:56.641 "ctrlr_loss_timeout_sec": 0, 00:19:56.641 "reconnect_delay_sec": 0, 00:19:56.641 "fast_io_fail_timeout_sec": 0, 00:19:56.641 "disable_auto_failback": false, 00:19:56.641 "generate_uuids": false, 00:19:56.641 "transport_tos": 0, 00:19:56.641 "nvme_error_stat": false, 00:19:56.641 "rdma_srq_size": 0, 00:19:56.641 "io_path_stat": false, 00:19:56.641 "allow_accel_sequence": false, 00:19:56.641 "rdma_max_cq_size": 0, 00:19:56.641 "rdma_cm_event_timeout_ms": 0 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:56.641 , 00:19:56.641 "dhchap_digests": [ 00:19:56.641 "sha256", 00:19:56.641 "sha384", 00:19:56.641 "sha512" 00:19:56.641 ], 00:19:56.641 "dhchap_dhgroups": [ 00:19:56.641 "null", 00:19:56.641 "ffdhe2048", 00:19:56.641 "ffdhe3072", 00:19:56.641 "ffdhe4096", 00:19:56.641 "ffdhe6144", 00:19:56.641 "ffdhe8192" 00:19:56.641 ] 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_nvme_attach_controller", 00:19:56.641 "params": { 00:19:56.641 "name": "nvme0", 00:19:56.641 "trtype": "TCP", 00:19:56.641 "adrfam": "IPv4", 00:19:56.641 "traddr": "10.0.0.2", 00:19:56.641 "trsvcid": "4420", 00:19:56.641 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.641 "prchk_reftag": false, 00:19:56.641 "prchk_guard": false, 00:19:56.641 "ctrlr_loss_timeout_sec": 0, 00:19:56.641 "reconnect_delay_sec": 0, 00:19:56.641 "fast_io_fail_timeout_sec": 0, 00:19:56.641 "psk": "key0", 00:19:56.641 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.641 "hdgst": false, 00:19:56.641 "ddgst": false, 00:19:56.641 "multipath": "multipath" 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_nvme_set_hotplug", 00:19:56.641 "params": { 00:19:56.641 "period_us": 100000, 00:19:56.641 "enable": false 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_enable_histogram", 00:19:56.641 "params": { 00:19:56.641 "name": "nvme0n1", 00:19:56.641 "enable": true 00:19:56.641 } 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "method": "bdev_wait_for_examine" 00:19:56.641 } 00:19:56.641 ] 00:19:56.641 }, 00:19:56.641 { 00:19:56.641 "subsystem": "nbd", 00:19:56.641 "config": [] 00:19:56.641 } 00:19:56.641 ] 00:19:56.641 }' 00:19:56.641 19:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.641 [2024-10-17 19:27:20.358024] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:19:56.641 [2024-10-17 19:27:20.358073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126263 ] 00:19:56.900 [2024-10-17 19:27:20.433685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.900 [2024-10-17 19:27:20.475000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.900 [2024-10-17 19:27:20.627488] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.468 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.468 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:57.468 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:57.468 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:57.727 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.728 19:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.728 Running I/O for 1 seconds... 00:19:59.107 5036.00 IOPS, 19.67 MiB/s 00:19:59.107 Latency(us) 00:19:59.107 [2024-10-17T17:27:22.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.107 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:59.107 Verification LBA range: start 0x0 length 0x2000 00:19:59.107 nvme0n1 : 1.01 5095.45 19.90 0.00 0.00 24944.25 4649.94 31706.94 00:19:59.107 [2024-10-17T17:27:22.891Z] =================================================================================================================== 00:19:59.107 [2024-10-17T17:27:22.891Z] Total : 5095.45 19.90 0.00 0.00 24944.25 4649.94 31706.94 00:19:59.107 { 00:19:59.107 "results": [ 00:19:59.107 { 00:19:59.107 "job": "nvme0n1", 00:19:59.107 "core_mask": "0x2", 00:19:59.107 "workload": "verify", 00:19:59.107 "status": "finished", 00:19:59.107 "verify_range": { 00:19:59.107 "start": 0, 00:19:59.107 "length": 8192 00:19:59.107 }, 00:19:59.107 "queue_depth": 128, 00:19:59.107 "io_size": 4096, 00:19:59.107 "runtime": 1.013453, 00:19:59.107 "iops": 5095.450899055013, 00:19:59.107 "mibps": 19.904105074433645, 00:19:59.107 "io_failed": 0, 00:19:59.107 "io_timeout": 0, 00:19:59.107 "avg_latency_us": 24944.25473940467, 00:19:59.107 "min_latency_us": 4649.935238095238, 00:19:59.107 "max_latency_us": 31706.94095238095 00:19:59.107 } 00:19:59.107 ], 00:19:59.107 "core_count": 1 00:19:59.107 } 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = 
--pid ']' 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:59.107 nvmf_trace.0 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2126263 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2126263 ']' 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2126263 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2126263 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2126263' 00:19:59.107 killing process with pid 2126263 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2126263 00:19:59.107 Received shutdown signal, test time was about 1.000000 seconds 00:19:59.107 00:19:59.107 Latency(us) 00:19:59.107 [2024-10-17T17:27:22.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.107 [2024-10-17T17:27:22.891Z] =================================================================================================================== 00:19:59.107 [2024-10-17T17:27:22.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2126263 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:59.107 rmmod nvme_tcp 00:19:59.107 rmmod nvme_fabrics 00:19:59.107 rmmod nvme_keyring 00:19:59.107 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:59.107 19:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 2126061 ']' 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 2126061 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2126061 ']' 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2126061 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2126061 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2126061' 00:19:59.367 killing process with pid 2126061 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2126061 00:19:59.367 19:27:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2126061 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.367 19:27:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xPMr8qNdlV /tmp/tmp.yvslCCdi6P /tmp/tmp.5XHsGmfKEg 00:20:01.909 00:20:01.909 real 1m20.611s 00:20:01.909 user 2m2.107s 00:20:01.909 sys 0m31.387s 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:01.909 ************************************ 00:20:01.909 END TEST nvmf_tls 
00:20:01.909 ************************************ 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.909 ************************************ 00:20:01.909 START TEST nvmf_fips 00:20:01.909 ************************************ 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:01.909 * Looking for test storage... 00:20:01.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:01.909 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:01.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.910 --rc genhtml_branch_coverage=1 00:20:01.910 --rc genhtml_function_coverage=1 00:20:01.910 --rc genhtml_legend=1 00:20:01.910 --rc geninfo_all_blocks=1 00:20:01.910 --rc geninfo_unexecuted_blocks=1 00:20:01.910 00:20:01.910 ' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:01.910 19:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:01.910 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:01.911 Error setting digest 00:20:01.911 4082EE73527F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:01.911 4082EE73527F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:01.911 
19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.911 19:27:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.491 19:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:08.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:08.491 19:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.491 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.491 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.491 19:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.491 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:20:08.492 00:20:08.492 --- 10.0.0.2 ping statistics --- 00:20:08.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.492 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:20:08.492 00:20:08.492 --- 10.0.0.1 ping statistics --- 00:20:08.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.492 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=2130285 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 2130285 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2130285 ']' 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.492 19:27:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.492 [2024-10-17 19:27:31.674518] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
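A condensed sketch of the bring-up the trace above just performed, using hypothetical names (tgt_if, ini_if, and SPDK_ROOT stand in for this run's cvl_0_0/cvl_0_1 ports and workspace path; the polling loop is an illustration, not the autotest's own waitforlisten helper):
ip netns add spdk_tgt_ns                        # target side gets its own namespace
ip link set "$tgt_if" netns spdk_tgt_ns
ip addr add 10.0.0.1/24 dev "$ini_if"           # initiator stays in the root namespace
ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev "$tgt_if"
ip link set "$ini_if" up
ip netns exec spdk_tgt_ns ip link set "$tgt_if" up
ip netns exec spdk_tgt_ns ip link set lo up
# tag the rule so teardown can strip it later with 'grep -v SPDK_NVMF'
iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                              # initiator -> target
ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1    # target -> initiator
modprobe nvme-tcp
ip netns exec spdk_tgt_ns "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# /var/tmp/spdk.sock is a UNIX socket, so it is reachable from the root namespace
until "$SPDK_ROOT/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || exit 1   # target died during startup
    sleep 0.5
done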
00:20:08.492 [2024-10-17 19:27:31.674566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.492 [2024-10-17 19:27:31.751122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.492 [2024-10-17 19:27:31.793116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.492 [2024-10-17 19:27:31.793151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.492 [2024-10-17 19:27:31.793158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.492 [2024-10-17 19:27:31.793164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.492 [2024-10-17 19:27:31.793169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:08.492 [2024-10-17 19:27:31.793728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.M2t 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.M2t 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.M2t 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.M2t 00:20:08.752 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:09.011 [2024-10-17 19:27:32.702169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.011 [2024-10-17 19:27:32.718175] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.011 [2024-10-17 19:27:32.718356] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.011 malloc0 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:09.011 19:27:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2130535 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2130535 /var/tmp/bdevperf.sock 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2130535 ']' 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:09.011 19:27:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:09.270 [2024-10-17 19:27:32.829620] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:20:09.270 [2024-10-17 19:27:32.829665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130535 ] 00:20:09.270 [2024-10-17 19:27:32.904490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.271 [2024-10-17 19:27:32.945984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.271 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:09.271 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:09.271 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.M2t 00:20:09.529 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.788 [2024-10-17 19:27:33.399493] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.788 TLSTESTn1 00:20:09.788 19:27:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.788 Running I/O for 10 seconds... 
00:20:12.143 5416.00 IOPS, 21.16 MiB/s
[2024-10-17T17:27:36.939Z] 5522.00 IOPS, 21.57 MiB/s
[2024-10-17T17:27:37.875Z] 5566.67 IOPS, 21.74 MiB/s
[2024-10-17T17:27:38.812Z] 5540.25 IOPS, 21.64 MiB/s
[2024-10-17T17:27:39.748Z] 5558.60 IOPS, 21.71 MiB/s
[2024-10-17T17:27:40.685Z] 5549.17 IOPS, 21.68 MiB/s
[2024-10-17T17:27:41.621Z] 5555.57 IOPS, 21.70 MiB/s
[2024-10-17T17:27:42.999Z] 5564.25 IOPS, 21.74 MiB/s
[2024-10-17T17:27:43.935Z] 5573.11 IOPS, 21.77 MiB/s
[2024-10-17T17:27:43.935Z] 5568.80 IOPS, 21.75 MiB/s
00:20:20.151 Latency(us)
[2024-10-17T17:27:43.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:20.151 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:20.151 Verification LBA range: start 0x0 length 0x2000
00:20:20.151 TLSTESTn1 : 10.02 5572.25 21.77 0.00 0.00 22934.00 7052.92 22344.66
[2024-10-17T17:27:43.935Z] ===================================================================================================================
[2024-10-17T17:27:43.935Z] Total : 5572.25 21.77 0.00 0.00 22934.00 7052.92 22344.66
00:20:20.151 {
00:20:20.151 "results": [
00:20:20.151 {
00:20:20.151 "job": "TLSTESTn1",
00:20:20.151 "core_mask": "0x4",
00:20:20.151 "workload": "verify",
00:20:20.151 "status": "finished",
00:20:20.151 "verify_range": {
00:20:20.151 "start": 0,
00:20:20.151 "length": 8192
00:20:20.151 },
00:20:20.151 "queue_depth": 128,
00:20:20.151 "io_size": 4096,
00:20:20.151 "runtime": 10.016602,
00:20:20.151 "iops": 5572.248952289409,
00:20:20.151 "mibps": 21.766597469880505,
00:20:20.151 "io_failed": 0,
00:20:20.151 "io_timeout": 0,
00:20:20.151 "avg_latency_us": 22933.99894884034,
00:20:20.151 "min_latency_us": 7052.921904761904,
00:20:20.151 "max_latency_us": 22344.655238095238
00:20:20.151 }
00:20:20.151 ],
00:20:20.151 "core_count": 1
00:20:20.151 }
00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:20.151 nvmf_trace.0
00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2130535
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2130535 ']'
19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips --
common/autotest_common.sh@954 -- # kill -0 2130535 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2130535 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2130535' 00:20:20.151 killing process with pid 2130535 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2130535 00:20:20.151 Received shutdown signal, test time was about 10.000000 seconds 00:20:20.151 00:20:20.151 Latency(us) 00:20:20.151 [2024-10-17T17:27:43.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:20.151 [2024-10-17T17:27:43.935Z] =================================================================================================================== 00:20:20.151 [2024-10-17T17:27:43.935Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2130535 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:20.151 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:20.410 rmmod nvme_tcp 00:20:20.410 rmmod nvme_fabrics 00:20:20.410 rmmod nvme_keyring 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 2130285 ']' 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 2130285 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2130285 ']' 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2130285 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.410 19:27:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2130285 00:20:20.410 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:20.410 19:27:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:20.410 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2130285' 00:20:20.410 killing process with pid 2130285 00:20:20.410 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2130285 00:20:20.410 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2130285 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.669 19:27:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.M2t 00:20:22.576 00:20:22.576 real 0m21.013s 00:20:22.576 user 0m21.835s 00:20:22.576 sys 0m9.737s 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:22.576 ************************************ 00:20:22.576 END TEST nvmf_fips 00:20:22.576 ************************************ 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:22.576 ************************************ 00:20:22.576 START TEST nvmf_control_msg_list 00:20:22.576 ************************************ 00:20:22.576 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:22.836 * Looking for test storage... 
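Both the nvmf_tls and nvmf_fips runs above finish with the same hygiene: the PSK only ever exists as an owner-only tempfile that an EXIT trap deletes, and teardown strips exactly the firewall rules it tagged at setup. A minimal sketch of that pattern ($tls_psk and the two pid variables are hypothetical stand-ins for values the tests track):
key_path=$(mktemp -t spdk-psk.XXX)
echo -n "$tls_psk" > "$key_path"                # interchange-format TLS PSK
chmod 0600 "$key_path"                          # owner-only before anything reads it
cleanup() {
    rm -f "$key_path"
    kill "$bdevperf_pid" "$nvmf_pid" 2> /dev/null || true
    # drop only the rules tagged SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true
}
trap cleanup EXIT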
00:20:22.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.837 --rc genhtml_branch_coverage=1 00:20:22.837 --rc genhtml_function_coverage=1 00:20:22.837 --rc genhtml_legend=1 00:20:22.837 --rc geninfo_all_blocks=1 00:20:22.837 --rc geninfo_unexecuted_blocks=1 00:20:22.837 00:20:22.837 ' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.837 --rc genhtml_branch_coverage=1 00:20:22.837 --rc genhtml_function_coverage=1 00:20:22.837 --rc genhtml_legend=1 00:20:22.837 --rc geninfo_all_blocks=1 00:20:22.837 --rc geninfo_unexecuted_blocks=1 00:20:22.837 00:20:22.837 ' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.837 --rc genhtml_branch_coverage=1 00:20:22.837 --rc genhtml_function_coverage=1 00:20:22.837 --rc genhtml_legend=1 00:20:22.837 --rc geninfo_all_blocks=1 00:20:22.837 --rc geninfo_unexecuted_blocks=1 00:20:22.837 00:20:22.837 ' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:22.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.837 --rc genhtml_branch_coverage=1 00:20:22.837 --rc genhtml_function_coverage=1 00:20:22.837 --rc genhtml_legend=1 00:20:22.837 --rc geninfo_all_blocks=1 00:20:22.837 --rc geninfo_unexecuted_blocks=1 00:20:22.837 00:20:22.837 ' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:22.837 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:22.838 19:27:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:29.414 19:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.414 19:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:29.414 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.415 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.415 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.415 19:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:29.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:20:29.415 00:20:29.415 --- 10.0.0.2 ping statistics --- 00:20:29.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.415 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:20:29.415 00:20:29.415 --- 10.0.0.1 ping statistics --- 00:20:29.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.415 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=2135876 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 2135876 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 2135876 ']' 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.415 [2024-10-17 19:27:52.582203] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:20:29.415 [2024-10-17 19:27:52.582257] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.415 [2024-10-17 19:27:52.661574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.415 [2024-10-17 19:27:52.700599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.415 [2024-10-17 19:27:52.700643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.415 [2024-10-17 19:27:52.700653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.415 [2024-10-17 19:27:52.700659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.415 [2024-10-17 19:27:52.700665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
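(Note: the nvmf_tgt whose DPDK/EAL startup is traced above runs inside the cvl_0_0_ns_spdk network namespace that nvmftestinit built a few lines earlier. A condensed sketch of that topology, using the cvl_0_0/cvl_0_1 interface names this runner detected; the trace additionally tags the iptables rule with an SPDK_NVMF comment so teardown can find it later:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

With both directions verified by single-packet pings, the target is launched as ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF, which produces the startup notices shown above.)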
00:20:29.415 [2024-10-17 19:27:52.701219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.415 [2024-10-17 19:27:52.843374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:29.415 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.416 Malloc0 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.416 19:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:29.416 [2024-10-17 19:27:52.883557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2135924 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2135925 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2135926 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:29.416 19:27:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2135924 00:20:29.416 [2024-10-17 19:27:52.961992] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:29.416 [2024-10-17 19:27:52.972022] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:29.416 [2024-10-17 19:27:52.972199] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:30.354 Initializing NVMe Controllers 00:20:30.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:30.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:30.354 Initialization complete. Launching workers. 
00:20:30.354 ======================================================== 00:20:30.354 Latency(us) 00:20:30.354 Device Information : IOPS MiB/s Average min max 00:20:30.354 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40886.87 40593.00 41016.80 00:20:30.354 ======================================================== 00:20:30.354 Total : 25.00 0.10 40886.87 40593.00 41016.80 00:20:30.354 00:20:30.354 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2135925 00:20:30.354 Initializing NVMe Controllers 00:20:30.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:30.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:30.354 Initialization complete. Launching workers. 00:20:30.354 ======================================================== 00:20:30.354 Latency(us) 00:20:30.354 Device Information : IOPS MiB/s Average min max 00:20:30.354 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 76.00 0.30 13623.03 236.68 40980.29 00:20:30.354 ======================================================== 00:20:30.354 Total : 76.00 0.30 13623.03 236.68 40980.29 00:20:30.354 00:20:30.354 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2135926 00:20:30.614 Initializing NVMe Controllers 00:20:30.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:30.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:30.614 Initialization complete. Launching workers. 00:20:30.614 ======================================================== 00:20:30.614 Latency(us) 00:20:30.614 Device Information : IOPS MiB/s Average min max 00:20:30.614 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40878.48 40461.33 41001.27 00:20:30.614 ======================================================== 00:20:30.614 Total : 25.00 0.10 40878.48 40461.33 41001.27 00:20:30.614 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:30.614 rmmod nvme_tcp 00:20:30.614 rmmod nvme_fabrics 00:20:30.614 rmmod nvme_keyring 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@515 -- # '[' -n 2135876 ']' 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 2135876 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 2135876 ']' 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 2135876 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2135876 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2135876' 00:20:30.614 killing process with pid 2135876 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 2135876 00:20:30.614 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 2135876 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.873 19:27:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.412 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:33.412 00:20:33.412 real 0m10.241s 00:20:33.412 user 0m7.075s 00:20:33.412 sys 0m5.256s 00:20:33.412 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.412 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:33.412 ************************************ 00:20:33.412 END TEST nvmf_control_msg_list 00:20:33.412 
************************************ 00:20:33.412 19:27:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:33.412 19:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.412 19:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.413 ************************************ 00:20:33.413 START TEST nvmf_wait_for_buf 00:20:33.413 ************************************ 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:33.413 * Looking for test storage... 00:20:33.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.413 --rc genhtml_branch_coverage=1 00:20:33.413 --rc genhtml_function_coverage=1 00:20:33.413 --rc genhtml_legend=1 00:20:33.413 --rc geninfo_all_blocks=1 00:20:33.413 --rc geninfo_unexecuted_blocks=1 00:20:33.413 00:20:33.413 ' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.413 --rc genhtml_branch_coverage=1 00:20:33.413 --rc genhtml_function_coverage=1 00:20:33.413 --rc genhtml_legend=1 00:20:33.413 --rc geninfo_all_blocks=1 00:20:33.413 --rc geninfo_unexecuted_blocks=1 00:20:33.413 00:20:33.413 ' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.413 --rc genhtml_branch_coverage=1 00:20:33.413 --rc genhtml_function_coverage=1 00:20:33.413 --rc genhtml_legend=1 00:20:33.413 --rc geninfo_all_blocks=1 00:20:33.413 --rc geninfo_unexecuted_blocks=1 00:20:33.413 00:20:33.413 ' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:33.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.413 --rc genhtml_branch_coverage=1 00:20:33.413 --rc genhtml_function_coverage=1 00:20:33.413 --rc genhtml_legend=1 00:20:33.413 --rc geninfo_all_blocks=1 00:20:33.413 --rc geninfo_unexecuted_blocks=1 00:20:33.413 00:20:33.413 ' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.413 19:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:33.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:33.413 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- #
'[' -z tcp ']' 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:33.414 19:27:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.987 
19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:39.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:39.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:39.987 Found net devices under 0000:86:00.0: cvl_0_0 00:20:39.987 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:39.988 Found net devices under 0000:86:00.1: cvl_0_1 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.988 19:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:39.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.477 ms 00:20:39.988 00:20:39.988 --- 10.0.0.2 ping statistics --- 00:20:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.988 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:39.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:39.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:20:39.988 00:20:39.988 --- 10.0.0.1 ping statistics --- 00:20:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.988 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=2139684 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 2139684 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 2139684 ']' 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.988 19:28:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 [2024-10-17 19:28:02.869911] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
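Note: the nvmf_tcp_init sequence traced above condenses to the short namespace recipe below. This is a sketch of what this run's log shows, not the helper's full logic (the ip -4 addr flush steps, error handling, and multi-NIC selection are omitted); the TGT_NS shorthand is ours, while the interface names, addresses, and the tagged iptables rule are taken from the trace:

# One E810 port (cvl_0_0) is moved into a private netns for the SPDK target;
# its sibling port (cvl_0_1) stays in the root netns as the initiator side.
TGT_NS=cvl_0_0_ns_spdk
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator IP
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
# Open the NVMe/TCP listener port; the SPDK_NVMF comment tag is what lets the
# cleanup path later strip the rule via: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                          # root netns -> target netns
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1  # target netns -> root netns

Both pings answering, as they do above, is what lets nvmftestinit return 0; nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk and continues its DPDK/EAL startup below.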
00:20:39.988 [2024-10-17 19:28:02.869959] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.988 [2024-10-17 19:28:02.948239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.988 [2024-10-17 19:28:02.987134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.988 [2024-10-17 19:28:02.987171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.988 [2024-10-17 19:28:02.987178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.988 [2024-10-17 19:28:02.987184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.988 [2024-10-17 19:28:02.987188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.988 [2024-10-17 19:28:02.987772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.988 19:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 Malloc0 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.988 [2024-10-17 19:28:03.168469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:39.988 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.989 [2024-10-17 19:28:03.192675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.989 19:28:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:39.989 [2024-10-17 19:28:03.277680] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:40.926 Initializing NVMe Controllers 00:20:40.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:40.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:40.927 Initialization complete. Launching workers. 00:20:40.927 ======================================================== 00:20:40.927 Latency(us) 00:20:40.927 Device Information : IOPS MiB/s Average min max 00:20:40.927 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 29.00 3.62 147254.29 7276.46 191532.28 00:20:40.927 ======================================================== 00:20:40.927 Total : 29.00 3.62 147254.29 7276.46 191532.28 00:20:40.927 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=438 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 438 -eq 0 ]] 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.186 rmmod nvme_tcp 00:20:41.186 rmmod nvme_fabrics 00:20:41.186 rmmod nvme_keyring 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 2139684 ']' 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 2139684 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 2139684 ']' 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 2139684 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@955 -- # uname 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2139684 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2139684' 00:20:41.186 killing process with pid 2139684 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 2139684 00:20:41.186 19:28:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 2139684 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.446 19:28:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.374 00:20:43.374 real 0m10.414s 00:20:43.374 user 0m4.002s 00:20:43.374 sys 0m4.843s 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:43.374 ************************************ 00:20:43.374 END TEST nvmf_wait_for_buf 00:20:43.374 ************************************ 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:43.374 19:28:07 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.374 19:28:07 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:49.955 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:49.955 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:49.955 Found net devices under 0000:86:00.0: cvl_0_0 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:49.955 Found net devices under 0000:86:00.1: cvl_0_1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:49.955 ************************************ 00:20:49.955 START TEST nvmf_perf_adq 00:20:49.955 ************************************ 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:49.955 * Looking for test storage... 00:20:49.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:49.955 19:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:49.955 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:49.956 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:49.956 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:49.956 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:49.956 19:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.956 --rc genhtml_branch_coverage=1 00:20:49.956 --rc genhtml_function_coverage=1 00:20:49.956 --rc genhtml_legend=1 00:20:49.956 --rc geninfo_all_blocks=1 00:20:49.956 --rc geninfo_unexecuted_blocks=1 00:20:49.956 00:20:49.956 ' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.956 --rc genhtml_branch_coverage=1 00:20:49.956 --rc genhtml_function_coverage=1 00:20:49.956 --rc genhtml_legend=1 00:20:49.956 --rc geninfo_all_blocks=1 00:20:49.956 --rc geninfo_unexecuted_blocks=1 00:20:49.956 00:20:49.956 ' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.956 --rc genhtml_branch_coverage=1 00:20:49.956 --rc genhtml_function_coverage=1 00:20:49.956 --rc genhtml_legend=1 00:20:49.956 --rc geninfo_all_blocks=1 00:20:49.956 --rc geninfo_unexecuted_blocks=1 00:20:49.956 00:20:49.956 ' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:49.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:49.956 --rc genhtml_branch_coverage=1 00:20:49.956 --rc genhtml_function_coverage=1 00:20:49.956 --rc genhtml_legend=1 00:20:49.956 --rc geninfo_all_blocks=1 00:20:49.956 --rc geninfo_unexecuted_blocks=1 00:20:49.956 00:20:49.956 ' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
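Note: the scripts/common.sh walk just above (lt 1.15 2 via cmp_versions, deciding which lcov flags to use) is a plain field-by-field dotted-version compare. A self-contained sketch of the same logic, under the assumption of a standalone helper named version_lt (the repo's actual helpers are lt/cmp_versions with more operators, and they validate each field through decimal()):

# Return 0 if dotted version $1 sorts strictly before $2; missing fields count
# as 0. Fields must be numeric (the real helper normalizes them via decimal()).
version_lt() {
    local IFS='.-:'   # split on the same separators the trace shows (IFS=.-:)
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2"   # fires here: 1 < 2 on the first field

Because the compare succeeds for this run's lcov, the legacy --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options are exported into LCOV_OPTS as seen above; the trace then continues below inside test/nvmf/common.sh, which perf_adq.sh has just sourced.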
00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:49.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:49.956 19:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:49.956 19:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:55.235 19:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.235 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.235 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.235 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.235 19:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.235 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:55.235 19:28:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:56.173 19:28:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:58.710 19:28:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:04.076 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:04.076 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.076 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:04.076 Found net devices under 0000:86:00.0: cvl_0_0 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:04.077 Found net devices under 0000:86:00.1: cvl_0_1 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.077 19:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:04.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:21:04.077 00:21:04.077 --- 10.0.0.2 ping statistics --- 00:21:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.077 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:04.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:21:04.077 00:21:04.077 --- 10.0.0.1 ping statistics --- 00:21:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.077 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2148023 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2148023 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2148023 ']' 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 [2024-10-17 19:28:27.261915] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
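
[annotation] The trace above is nvmftestinit building the physical TCP topology: the first E810 port (cvl_0_0) is moved into a fresh network namespace, cvl_0_0_ns_spdk, and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1; an iptables rule opens the NVMe/TCP listener port and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A minimal sketch of the same setup, with interface names, addresses, and flags taken from this run (the nvmf_tgt path is abbreviated):

  ip netns add cvl_0_0_ns_spdk                  # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc

Because the target runs entirely inside the namespace, the initiator on the host exercises the real NIC path between the two E810 ports rather than loopback.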
00:21:04.077 [2024-10-17 19:28:27.261954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.077 [2024-10-17 19:28:27.322715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.077 [2024-10-17 19:28:27.363173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.077 [2024-10-17 19:28:27.363213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.077 [2024-10-17 19:28:27.363220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.077 [2024-10-17 19:28:27.363226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.077 [2024-10-17 19:28:27.363230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.077 [2024-10-17 19:28:27.364684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.077 [2024-10-17 19:28:27.364793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.077 [2024-10-17 19:28:27.364897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.077 [2024-10-17 19:28:27.364899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 
19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 [2024-10-17 19:28:27.610041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 Malloc1 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.077 [2024-10-17 19:28:27.671842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2148052 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:04.077 19:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:05.984 "tick_rate": 2100000000, 00:21:05.984 "poll_groups": [ 00:21:05.984 { 00:21:05.984 "name": "nvmf_tgt_poll_group_000", 00:21:05.984 "admin_qpairs": 1, 00:21:05.984 "io_qpairs": 1, 00:21:05.984 "current_admin_qpairs": 1, 00:21:05.984 "current_io_qpairs": 1, 00:21:05.984 "pending_bdev_io": 0, 00:21:05.984 "completed_nvme_io": 19932, 00:21:05.984 "transports": [ 00:21:05.984 { 00:21:05.984 "trtype": "TCP" 00:21:05.984 } 00:21:05.984 ] 00:21:05.984 }, 00:21:05.984 { 00:21:05.984 "name": "nvmf_tgt_poll_group_001", 00:21:05.984 "admin_qpairs": 0, 00:21:05.984 "io_qpairs": 1, 00:21:05.984 "current_admin_qpairs": 0, 00:21:05.984 "current_io_qpairs": 1, 00:21:05.984 "pending_bdev_io": 0, 00:21:05.984 "completed_nvme_io": 20115, 00:21:05.984 "transports": [ 00:21:05.984 { 00:21:05.984 "trtype": "TCP" 00:21:05.984 } 00:21:05.984 ] 00:21:05.984 }, 00:21:05.984 { 00:21:05.984 "name": "nvmf_tgt_poll_group_002", 00:21:05.984 "admin_qpairs": 0, 00:21:05.984 "io_qpairs": 1, 00:21:05.984 "current_admin_qpairs": 0, 00:21:05.984 "current_io_qpairs": 1, 00:21:05.984 "pending_bdev_io": 0, 00:21:05.984 "completed_nvme_io": 20155, 00:21:05.984 "transports": [ 00:21:05.984 { 00:21:05.984 "trtype": "TCP" 00:21:05.984 } 00:21:05.984 ] 00:21:05.984 }, 00:21:05.984 { 00:21:05.984 "name": "nvmf_tgt_poll_group_003", 00:21:05.984 "admin_qpairs": 0, 00:21:05.984 "io_qpairs": 1, 00:21:05.984 "current_admin_qpairs": 0, 00:21:05.984 "current_io_qpairs": 1, 00:21:05.984 "pending_bdev_io": 0, 00:21:05.984 "completed_nvme_io": 20056, 00:21:05.984 "transports": [ 00:21:05.984 { 00:21:05.984 "trtype": "TCP" 00:21:05.984 } 00:21:05.984 ] 00:21:05.984 } 00:21:05.984 ] 00:21:05.984 }' 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:05.984 19:28:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2148052 00:21:14.103 Initializing NVMe Controllers 00:21:14.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:14.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:14.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:14.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:21:14.103 Initialization complete. Launching workers. 00:21:14.103 ======================================================== 00:21:14.103 Latency(us) 00:21:14.103 Device Information : IOPS MiB/s Average min max 00:21:14.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10587.00 41.36 6046.19 1679.31 10184.10 00:21:14.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10815.80 42.25 5918.61 2077.89 12946.69 00:21:14.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10705.20 41.82 5980.10 2286.21 9955.01 00:21:14.103 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10675.50 41.70 5995.61 2408.29 10035.90 00:21:14.103 ======================================================== 00:21:14.103 Total : 42783.49 167.12 5984.78 1679.31 12946.69 00:21:14.103 00:21:14.103 [2024-10-17 19:28:37.829729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd10620 is same with the state(6) to be set 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.103 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.103 rmmod nvme_tcp 00:21:14.103 rmmod nvme_fabrics 00:21:14.103 rmmod nvme_keyring 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2148023 ']' 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2148023 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2148023 ']' 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2148023 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2148023 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2148023' 00:21:14.362 killing process with pid 2148023 00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2148023 
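
[annotation] The ten-second randread pass above (spdk_nvme_perf -q 64 -o 4096 -c 0xF0, one I/O qpair per lcore 4-7) is the pre-ADQ baseline: nvmf_get_stats reports current_io_qpairs of exactly 1 on each of the four poll groups, and the run totals roughly 42.8K IOPS at about 5.98 ms average latency. The script asserts the one-qpair-per-poll-group spread with the jq/wc pipeline traced above; a sketch of that check, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock (the error handling here is illustrative, not the script's):

  # count poll groups that currently own exactly one I/O qpair
  count=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
  [[ $count -ne 4 ]] && { echo "expected 1 qpair per poll group, got $count match(es)"; exit 1; }

With round-robin placement and socket priority 0, this even spread is the expected default; the second pass below reloads the ice driver and repeats the measurement with ADQ steering enabled.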
00:21:14.362 19:28:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2148023 00:21:14.362 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:14.362 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:14.362 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:14.362 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:14.362 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:14.363 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:14.363 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:14.363 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.363 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:14.363 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.621 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.621 19:28:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.526 19:28:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:16.526 19:28:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:16.526 19:28:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:16.526 19:28:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:17.911 19:28:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:19.819 19:28:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.103 19:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.103 19:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.103 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.103 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:25.104 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.104 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.104 19:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.104 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.527 ms 00:21:25.104 00:21:25.104 --- 10.0.0.2 ping statistics --- 00:21:25.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.104 rtt min/avg/max/mdev = 0.527/0.527/0.527/0.000 ms 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:25.104 00:21:25.104 --- 10.0.0.1 ping statistics --- 00:21:25.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.104 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:25.104 19:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:25.104 net.core.busy_poll = 1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:25.104 net.core.busy_read = 1 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:25.104 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:25.364 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:25.364 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:25.364 19:28:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=2151831 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 2151831 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2151831 ']' 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:25.364 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.364 [2024-10-17 19:28:49.093272] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
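
[annotation] adq_configure_driver above is the core of the test: it enables hardware TC offload on the target port, disables the channel-pkt-inspect-optimize private flag, turns on kernel busy polling, then splits the NIC into two traffic classes with an offloaded mqprio qdisc (TC0 -> queues 0-1, TC1 -> queues 2-3) and pins NVMe/TCP traffic for 10.0.0.2:4420 to TC1 with a hardware-only flower filter, finishing with the set_xps_rxqs helper to align transmit queues. The same sequence, reformatted with line continuations for readability (device and namespace names taken from this run):

  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS ethtool --offload cvl_0_0 hw-tc-offload on
  $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1
  # two traffic classes, offloaded to the NIC (hw 1 mode channel):
  #   TC0 -> 2 queues at offset 0, TC1 -> 2 queues at offset 2
  $NS tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP (dst 10.0.0.2:4420) into TC1 entirely in hardware (skip_sw)
  $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The SPDK side pairs this with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --sock-priority 1 (traced just below), so incoming connections can be grouped by the hardware queue that received them.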
00:21:25.364 [2024-10-17 19:28:49.093325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.623 [2024-10-17 19:28:49.174304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.623 [2024-10-17 19:28:49.216465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.623 [2024-10-17 19:28:49.216502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.623 [2024-10-17 19:28:49.216509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.623 [2024-10-17 19:28:49.216515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.623 [2024-10-17 19:28:49.216520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.623 [2024-10-17 19:28:49.217917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.623 [2024-10-17 19:28:49.218024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.623 [2024-10-17 19:28:49.218135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.623 [2024-10-17 19:28:49.218136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.191 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:26.450 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:26.450 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.450 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:26.450 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 19:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.450 
19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 [2024-10-17 19:28:50.113770] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.450 Malloc1 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.450 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.451 [2024-10-17 19:28:50.180149] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2152087 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:26.451 19:28:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:28.987 "tick_rate": 2100000000, 00:21:28.987 "poll_groups": [ 00:21:28.987 { 00:21:28.987 "name": "nvmf_tgt_poll_group_000", 00:21:28.987 "admin_qpairs": 1, 00:21:28.987 "io_qpairs": 3, 00:21:28.987 "current_admin_qpairs": 1, 00:21:28.987 "current_io_qpairs": 3, 00:21:28.987 "pending_bdev_io": 0, 00:21:28.987 "completed_nvme_io": 29423, 00:21:28.987 "transports": [ 00:21:28.987 { 00:21:28.987 "trtype": "TCP" 00:21:28.987 } 00:21:28.987 ] 00:21:28.987 }, 00:21:28.987 { 00:21:28.987 "name": "nvmf_tgt_poll_group_001", 00:21:28.987 "admin_qpairs": 0, 00:21:28.987 "io_qpairs": 1, 00:21:28.987 "current_admin_qpairs": 0, 00:21:28.987 "current_io_qpairs": 1, 00:21:28.987 "pending_bdev_io": 0, 00:21:28.987 "completed_nvme_io": 29188, 00:21:28.987 "transports": [ 00:21:28.987 { 00:21:28.987 "trtype": "TCP" 00:21:28.987 } 00:21:28.987 ] 00:21:28.987 }, 00:21:28.987 { 00:21:28.987 "name": "nvmf_tgt_poll_group_002", 00:21:28.987 "admin_qpairs": 0, 00:21:28.987 "io_qpairs": 0, 00:21:28.987 "current_admin_qpairs": 0, 00:21:28.987 "current_io_qpairs": 0, 00:21:28.987 "pending_bdev_io": 0, 00:21:28.987 "completed_nvme_io": 0, 00:21:28.987 "transports": [ 00:21:28.987 { 00:21:28.987 "trtype": "TCP" 00:21:28.987 } 00:21:28.987 ] 00:21:28.987 }, 00:21:28.987 { 00:21:28.987 "name": "nvmf_tgt_poll_group_003", 00:21:28.987 "admin_qpairs": 0, 00:21:28.987 "io_qpairs": 0, 00:21:28.987 "current_admin_qpairs": 0, 00:21:28.987 "current_io_qpairs": 0, 00:21:28.987 "pending_bdev_io": 0, 00:21:28.987 "completed_nvme_io": 0, 00:21:28.987 "transports": [ 00:21:28.987 { 00:21:28.987 "trtype": "TCP" 00:21:28.987 } 00:21:28.987 ] 00:21:28.987 } 00:21:28.987 ] 00:21:28.987 }' 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:28.987 19:28:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2152087 00:21:37.108 Initializing NVMe Controllers 00:21:37.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:37.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:37.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:37.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:21:37.109 Initialization complete. Launching workers. 00:21:37.109 ======================================================== 00:21:37.109 Latency(us) 00:21:37.109 Device Information : IOPS MiB/s Average min max 00:21:37.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 15582.60 60.87 4107.06 1550.68 6356.88 00:21:37.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5448.90 21.28 11747.10 1122.32 59543.93 00:21:37.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5189.60 20.27 12341.93 1321.60 58934.82 00:21:37.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4634.90 18.11 13865.80 1595.80 58603.94 00:21:37.109 ======================================================== 00:21:37.109 Total : 30855.99 120.53 8307.09 1122.32 59543.93 00:21:37.109 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:37.109 rmmod nvme_tcp 00:21:37.109 rmmod nvme_fabrics 00:21:37.109 rmmod nvme_keyring 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 2151831 ']' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 2151831 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2151831 ']' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2151831 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2151831 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2151831' 00:21:37.109 killing process with pid 2151831 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2151831 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2151831 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.109 19:29:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:39.016 00:21:39.016 real 0m49.897s 00:21:39.016 user 2m46.893s 00:21:39.016 sys 0m10.335s 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.016 ************************************ 00:21:39.016 END TEST nvmf_perf_adq 00:21:39.016 ************************************ 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.016 ************************************ 00:21:39.016 START TEST nvmf_shutdown 00:21:39.016 ************************************ 00:21:39.016 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:39.276 * Looking for test storage... 
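The teardown above removes only the firewall rules the test installed, by filtering the saved ruleset on the SPDK_NVMF comment marker before restoring it. A sketch of the pattern, assuming the rules were added with -m comment --comment 'SPDK_NVMF:...' as seen later in this log:

# Remove only rules tagged with the SPDK_NVMF comment marker,
# leaving the rest of the firewall configuration intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore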
00:21:39.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.276 --rc genhtml_branch_coverage=1 00:21:39.276 --rc genhtml_function_coverage=1 00:21:39.276 --rc genhtml_legend=1 00:21:39.276 --rc geninfo_all_blocks=1 00:21:39.276 --rc geninfo_unexecuted_blocks=1 00:21:39.276 00:21:39.276 ' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.276 --rc genhtml_branch_coverage=1 00:21:39.276 --rc genhtml_function_coverage=1 00:21:39.276 --rc genhtml_legend=1 00:21:39.276 --rc geninfo_all_blocks=1 00:21:39.276 --rc geninfo_unexecuted_blocks=1 00:21:39.276 00:21:39.276 ' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.276 --rc genhtml_branch_coverage=1 00:21:39.276 --rc genhtml_function_coverage=1 00:21:39.276 --rc genhtml_legend=1 00:21:39.276 --rc geninfo_all_blocks=1 00:21:39.276 --rc geninfo_unexecuted_blocks=1 00:21:39.276 00:21:39.276 ' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:39.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.276 --rc genhtml_branch_coverage=1 00:21:39.276 --rc genhtml_function_coverage=1 00:21:39.276 --rc genhtml_legend=1 00:21:39.276 --rc geninfo_all_blocks=1 00:21:39.276 --rc geninfo_unexecuted_blocks=1 00:21:39.276 00:21:39.276 ' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
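The trace above walks the lt/cmp_versions helpers in scripts/common.sh, which split each version string on '.', '-', and ':' and compare components numerically from the left (so lcov 1.15 sorts below 2). A condensed sketch of that comparison under a hypothetical helper name, not the script's exact implementation:

# Hypothetical condensation of scripts/common.sh's lt/cmp_versions:
# succeed iff version $1 sorts strictly below version $2.
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # versions are equal
}

version_lt "$(lcov --version | awk '{print $NF}')" 2 \
    && echo "lcov older than 2"

Component-wise numeric comparison matters here: a plain string compare would wrongly rank 1.15 above 1.2.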
00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:39.276 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:39.277 19:29:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:39.277 19:29:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:39.277 ************************************ 00:21:39.277 START TEST nvmf_shutdown_tc1 00:21:39.277 ************************************ 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.277 19:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:45.851 19:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:45.851 19:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:45.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:45.851 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:45.851 Found net devices under 0000:86:00.0: cvl_0_0 00:21:45.851 19:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:45.851 Found net devices under 0000:86:00.1: cvl_0_1 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.851 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:45.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:21:45.852 00:21:45.852 --- 10.0.0.2 ping statistics --- 00:21:45.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.852 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:45.852 00:21:45.852 --- 10.0.0.1 ping statistics --- 00:21:45.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.852 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:45.852 19:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=2157312 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 2157312 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2157312 ']' 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
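nvmfappstart backgrounds the target and then blocks in waitforlisten until the new process answers on its RPC socket. A rough, simplified equivalent of that wait, assuming SPDK's scripts/rpc.py and its spdk_get_version method; the helper name is ours and, unlike the real waitforlisten, there is no retry cap:

# Poll until the target PID answers on its RPC socket.
wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock}
    while kill -0 "$pid" 2>/dev/null; do
        if [ -S "$sock" ] && \
           ./scripts/rpc.py -s "$sock" spdk_get_version >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.5
    done
    echo "process $pid exited before listening on $sock" >&2
    return 1
}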
00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.852 [2024-10-17 19:29:09.067839] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:21:45.852 [2024-10-17 19:29:09.067882] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.852 [2024-10-17 19:29:09.145458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.852 [2024-10-17 19:29:09.187292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.852 [2024-10-17 19:29:09.187332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.852 [2024-10-17 19:29:09.187339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.852 [2024-10-17 19:29:09.187344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.852 [2024-10-17 19:29:09.187349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.852 [2024-10-17 19:29:09.188949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.852 [2024-10-17 19:29:09.189041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.852 [2024-10-17 19:29:09.189147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.852 [2024-10-17 19:29:09.189149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.852 [2024-10-17 19:29:09.325390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:45.852 19:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.852 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:45.852 Malloc1 
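The loop above appends one block of RPC commands per subsystem to rpcs.txt and replays them through rpc_cmd, which is what produces Malloc1 through Malloc10 just below. A sketch of the equivalent setup issued one command at a time, assuming scripts/rpc.py on the default socket and the MallocN/cnodeN naming seen earlier in this log:

RPC="./scripts/rpc.py"     # assumption: default /var/tmp/spdk.sock

$RPC nvmf_create_transport -t tcp -o -u 8192

# One 64 MiB malloc bdev, subsystem, namespace and listener per index.
for i in $(seq 1 10); do
    $RPC bdev_malloc_create 64 512 -b "Malloc$i"
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "$(printf 'SPDK%014d' "$i")"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done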
00:21:45.852 [2024-10-17 19:29:09.449420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.852 Malloc2 00:21:45.852 Malloc3 00:21:45.852 Malloc4 00:21:45.852 Malloc5 00:21:46.112 Malloc6 00:21:46.112 Malloc7 00:21:46.112 Malloc8 00:21:46.112 Malloc9 00:21:46.112 Malloc10 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2157557 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2157557 /var/tmp/bdevperf.sock 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2157557 ']' 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
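gen_nvmf_target_json assembles the --json config consumed by bdev_svc/bdevperf from the heredoc template repeated below: after variable substitution, each subsystem contributes one bdev_nvme_attach_controller entry. A sketch of what the first substituted entry plausibly looks like in this run, assuming TEST_TRANSPORT=tcp and the 10.0.0.2:4420 listener configured above:

# Roughly the substituted config entry for subsystem 1.
cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF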
00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.112 { 00:21:46.112 "params": { 00:21:46.112 "name": "Nvme$subsystem", 00:21:46.112 "trtype": "$TEST_TRANSPORT", 00:21:46.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.112 "adrfam": "ipv4", 00:21:46.112 "trsvcid": "$NVMF_PORT", 00:21:46.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.112 "hdgst": ${hdgst:-false}, 00:21:46.112 "ddgst": ${ddgst:-false} 00:21:46.112 }, 00:21:46.112 "method": "bdev_nvme_attach_controller" 00:21:46.112 } 00:21:46.112 EOF 00:21:46.112 )") 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.112 { 00:21:46.112 "params": { 00:21:46.112 "name": "Nvme$subsystem", 00:21:46.112 "trtype": "$TEST_TRANSPORT", 00:21:46.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.112 "adrfam": "ipv4", 00:21:46.112 "trsvcid": "$NVMF_PORT", 00:21:46.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.112 "hdgst": ${hdgst:-false}, 00:21:46.112 "ddgst": ${ddgst:-false} 00:21:46.112 }, 00:21:46.112 "method": "bdev_nvme_attach_controller" 00:21:46.112 } 00:21:46.112 EOF 00:21:46.112 )") 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.112 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 [2024-10-17 19:29:09.927251] Starting SPDK 
v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:21:46.372 [2024-10-17 19:29:09.927298] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.372 "hdgst": ${hdgst:-false}, 00:21:46.372 "ddgst": ${ddgst:-false} 00:21:46.372 }, 00:21:46.372 "method": "bdev_nvme_attach_controller" 00:21:46.372 } 00:21:46.372 EOF 00:21:46.372 )") 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:46.372 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:46.372 { 00:21:46.372 "params": { 00:21:46.372 "name": "Nvme$subsystem", 00:21:46.372 "trtype": "$TEST_TRANSPORT", 00:21:46.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:46.372 "adrfam": "ipv4", 00:21:46.372 "trsvcid": "$NVMF_PORT", 00:21:46.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:46.373 "hdgst": ${hdgst:-false}, 00:21:46.373 "ddgst": ${ddgst:-false} 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 } 00:21:46.373 EOF 00:21:46.373 )") 00:21:46.373 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:46.373 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
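The xtrace above is SPDK's gen_nvmf_target_json helper (nvmf/common.sh) building the attach-controller config: the @560 loop appends one heredoc-built JSON object per requested subsystem to the config array, and the @582-@584 records just below join those objects with IFS=, and pretty-print the result through jq. A minimal standalone sketch of that pattern, assuming illustrative defaults for TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT (the real values come from the test environment) and a plain JSON array as the output wrapper, since the enclosing wrapper is not visible in this trace:

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # One object per subsystem; ${hdgst:-false}/${ddgst:-false} keep the
        # digest flags optional, exactly as in the traced heredoc.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the fragments (the IFS=,/printf step traced below) and let
    # jq validate and pretty-print them as one array.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this yields exactly the ten bdev_nvme_attach_controller entries printed in the next records.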
00:21:46.373 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:46.373 19:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme1", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme2", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme3", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme4", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme5", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme6", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme7", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme8", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme9", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 },{ 00:21:46.373 "params": { 00:21:46.373 "name": "Nvme10", 00:21:46.373 "trtype": "tcp", 00:21:46.373 "traddr": "10.0.0.2", 00:21:46.373 "adrfam": "ipv4", 00:21:46.373 "trsvcid": "4420", 00:21:46.373 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:46.373 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:46.373 "hdgst": false, 00:21:46.373 "ddgst": false 00:21:46.373 }, 00:21:46.373 "method": "bdev_nvme_attach_controller" 00:21:46.373 }' 00:21:46.373 [2024-10-17 19:29:10.004472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.373 [2024-10-17 19:29:10.047691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2157557 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:48.328 19:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:49.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2157557 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2157312 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in 
"${@:-1}" 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.267 { 00:21:49.267 "params": { 00:21:49.267 "name": "Nvme$subsystem", 00:21:49.267 "trtype": "$TEST_TRANSPORT", 00:21:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.267 "adrfam": "ipv4", 00:21:49.267 "trsvcid": "$NVMF_PORT", 00:21:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.267 "hdgst": ${hdgst:-false}, 00:21:49.267 "ddgst": ${ddgst:-false} 00:21:49.267 }, 00:21:49.267 "method": "bdev_nvme_attach_controller" 00:21:49.267 } 00:21:49.267 EOF 00:21:49.267 )") 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.267 { 00:21:49.267 "params": { 00:21:49.267 "name": "Nvme$subsystem", 00:21:49.267 "trtype": "$TEST_TRANSPORT", 00:21:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.267 "adrfam": "ipv4", 00:21:49.267 "trsvcid": "$NVMF_PORT", 00:21:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.267 "hdgst": ${hdgst:-false}, 00:21:49.267 "ddgst": ${ddgst:-false} 00:21:49.267 }, 00:21:49.267 "method": "bdev_nvme_attach_controller" 00:21:49.267 } 00:21:49.267 EOF 00:21:49.267 )") 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.267 { 00:21:49.267 "params": { 00:21:49.267 "name": "Nvme$subsystem", 00:21:49.267 "trtype": "$TEST_TRANSPORT", 00:21:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.267 "adrfam": "ipv4", 00:21:49.267 "trsvcid": "$NVMF_PORT", 00:21:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.267 "hdgst": ${hdgst:-false}, 00:21:49.267 "ddgst": ${ddgst:-false} 00:21:49.267 }, 00:21:49.267 "method": "bdev_nvme_attach_controller" 00:21:49.267 } 00:21:49.267 EOF 00:21:49.267 )") 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.267 { 00:21:49.267 "params": { 00:21:49.267 "name": "Nvme$subsystem", 00:21:49.267 "trtype": "$TEST_TRANSPORT", 00:21:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.267 "adrfam": "ipv4", 00:21:49.267 "trsvcid": "$NVMF_PORT", 00:21:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.267 "hdgst": ${hdgst:-false}, 00:21:49.267 "ddgst": ${ddgst:-false} 00:21:49.267 }, 00:21:49.267 "method": "bdev_nvme_attach_controller" 00:21:49.267 } 00:21:49.267 EOF 00:21:49.267 )") 00:21:49.267 19:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.267 { 00:21:49.267 "params": { 00:21:49.267 "name": "Nvme$subsystem", 00:21:49.267 "trtype": "$TEST_TRANSPORT", 00:21:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.267 "adrfam": "ipv4", 00:21:49.267 "trsvcid": "$NVMF_PORT", 00:21:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.267 "hdgst": ${hdgst:-false}, 00:21:49.267 "ddgst": ${ddgst:-false} 00:21:49.267 }, 00:21:49.267 "method": "bdev_nvme_attach_controller" 00:21:49.267 } 00:21:49.267 EOF 00:21:49.267 )") 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.267 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.267 { 00:21:49.267 "params": { 00:21:49.267 "name": "Nvme$subsystem", 00:21:49.267 "trtype": "$TEST_TRANSPORT", 00:21:49.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.267 "adrfam": "ipv4", 00:21:49.267 "trsvcid": "$NVMF_PORT", 00:21:49.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.267 "hdgst": ${hdgst:-false}, 00:21:49.267 "ddgst": ${ddgst:-false} 00:21:49.267 }, 00:21:49.267 "method": "bdev_nvme_attach_controller" 00:21:49.267 } 00:21:49.267 EOF 00:21:49.268 )") 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.268 { 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme$subsystem", 00:21:49.268 "trtype": "$TEST_TRANSPORT", 00:21:49.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "$NVMF_PORT", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.268 "hdgst": ${hdgst:-false}, 00:21:49.268 "ddgst": ${ddgst:-false} 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 } 00:21:49.268 EOF 00:21:49.268 )") 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.268 [2024-10-17 19:29:12.850893] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:21:49.268 [2024-10-17 19:29:12.850944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2158066 ] 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.268 { 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme$subsystem", 00:21:49.268 "trtype": "$TEST_TRANSPORT", 00:21:49.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "$NVMF_PORT", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.268 "hdgst": ${hdgst:-false}, 00:21:49.268 "ddgst": ${ddgst:-false} 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 } 00:21:49.268 EOF 00:21:49.268 )") 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.268 { 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme$subsystem", 00:21:49.268 "trtype": "$TEST_TRANSPORT", 00:21:49.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "$NVMF_PORT", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.268 "hdgst": ${hdgst:-false}, 00:21:49.268 "ddgst": ${ddgst:-false} 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 } 00:21:49.268 EOF 00:21:49.268 )") 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:49.268 { 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme$subsystem", 00:21:49.268 "trtype": "$TEST_TRANSPORT", 00:21:49.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "$NVMF_PORT", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.268 "hdgst": ${hdgst:-false}, 00:21:49.268 "ddgst": ${ddgst:-false} 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 } 00:21:49.268 EOF 00:21:49.268 )") 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 
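The records above (shutdown.sh@81-@92) are the tc1 handoff: rpc_cmd framework_wait_init confirms the bdev_svc app (pid 2157557) initialized against all ten controllers, the app is then killed outright (the "Killed ... bdev_svc" line at shutdown.sh line 74), kill -0 verifies the nvmf target (pid 2157312) survived, and bdevperf is pointed at the same generated config through process substitution, which is why its --json argument appears as /dev/fd/62. A sketch of that sequence, relying on the gen_nvmf_target_json helper sketched earlier and using $bdevsvc_pid/$nvmfpid as illustrative stand-ins for the literal pids:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Tear down the throwaway bdev_svc app; the SIGKILL is what produces the
# "Killed ... bdev_svc" line in the job output above.
kill -9 "$bdevsvc_pid"
rm -f /var/run/spdk_bdev1
sleep 1

# The nvmf target itself must still be alive (shutdown.sh@89).
kill -0 "$nvmfpid"

# Re-feed the generated config to bdevperf via process substitution:
# queue depth 64, 64 KiB I/Os, verify workload, 1 second run.
"$rootdir/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1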
00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:21:49.268 19:29:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme1", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme2", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme3", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme4", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme5", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme6", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme7", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:49.268 "hdgst": false, 00:21:49.268 "ddgst": false 00:21:49.268 }, 00:21:49.268 "method": "bdev_nvme_attach_controller" 00:21:49.268 },{ 00:21:49.268 "params": { 00:21:49.268 "name": "Nvme8", 00:21:49.268 "trtype": "tcp", 00:21:49.268 "traddr": "10.0.0.2", 00:21:49.268 "adrfam": "ipv4", 00:21:49.268 "trsvcid": "4420", 00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:49.268 "hdgst": false,
00:21:49.268 "ddgst": false
00:21:49.268 },
00:21:49.268 "method": "bdev_nvme_attach_controller"
00:21:49.268 },{
00:21:49.268 "params": {
00:21:49.268 "name": "Nvme9",
00:21:49.268 "trtype": "tcp",
00:21:49.268 "traddr": "10.0.0.2",
00:21:49.268 "adrfam": "ipv4",
00:21:49.268 "trsvcid": "4420",
00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:21:49.268 "hdgst": false,
00:21:49.268 "ddgst": false
00:21:49.268 },
00:21:49.268 "method": "bdev_nvme_attach_controller"
00:21:49.268 },{
00:21:49.268 "params": {
00:21:49.268 "name": "Nvme10",
00:21:49.268 "trtype": "tcp",
00:21:49.268 "traddr": "10.0.0.2",
00:21:49.268 "adrfam": "ipv4",
00:21:49.268 "trsvcid": "4420",
00:21:49.268 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:21:49.268 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:21:49.268 "hdgst": false,
00:21:49.268 "ddgst": false
00:21:49.268 },
00:21:49.268 "method": "bdev_nvme_attach_controller"
00:21:49.268 }'
00:21:49.268 [2024-10-17 19:29:12.927370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:49.268 [2024-10-17 19:29:12.968221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:21:50.648 Running I/O for 1 seconds...
00:21:51.844 2255.00 IOPS, 140.94 MiB/s
00:21:51.844 Latency(us)
00:21:51.844 [2024-10-17T17:29:15.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.844 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme1n1 : 1.10 232.44 14.53 0.00 0.00 272567.10 18100.42 230686.72
00:21:51.844 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme2n1 : 1.05 246.52 15.41 0.00 0.00 251988.74 3417.23 226692.14
00:21:51.844 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme3n1 : 1.08 301.21 18.83 0.00 0.00 203043.44 4743.56 206719.27
00:21:51.844 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme4n1 : 1.11 288.18 18.01 0.00 0.00 210417.66 13606.52 218702.99
00:21:51.844 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme5n1 : 1.12 284.62 17.79 0.00 0.00 210039.76 16727.28 217704.35
00:21:51.844 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme6n1 : 1.12 286.95 17.93 0.00 0.00 205175.66 16852.11 203723.34
00:21:51.844 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme7n1 : 1.12 289.31 18.08 0.00 0.00 200273.93 1755.43 214708.42
00:21:51.844 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme8n1 : 1.13 284.16 17.76 0.00 0.00 201200.40 13107.20 224694.86
00:21:51.844 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme9n1 : 1.13 282.97 17.69 0.00 0.00 199143.72 15229.32 216705.71
00:21:51.844 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:51.844 Verification LBA range: start 0x0 length 0x400
00:21:51.844 Nvme10n1 : 1.16 330.03 20.63 0.00 0.00 168736.65 3900.95 231685.36
00:21:51.844 [2024-10-17T17:29:15.628Z] ===================================================================================================================
00:21:51.844 [2024-10-17T17:29:15.628Z] Total : 2826.38 176.65 0.00 0.00 209361.61 1755.43 231685.36
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:52.103 rmmod nvme_tcp
00:21:52.103 rmmod nvme_fabrics
00:21:52.103 rmmod nvme_keyring
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 2157312 ']'
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 2157312
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2157312 ']'
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2157312
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2157312
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2157312' 00:21:52.103 killing process with pid 2157312 00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2157312 00:21:52.103 19:29:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2157312 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.671 19:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.587 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:54.588 00:21:54.588 real 0m15.208s 00:21:54.588 user 0m33.741s 00:21:54.588 sys 0m5.772s 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 ************************************ 00:21:54.588 END TEST nvmf_shutdown_tc1 00:21:54.588 ************************************ 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 ************************************ 00:21:54.588 START TEST nvmf_shutdown_tc2 00:21:54.588 ************************************ 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:54.588 19:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:54.588 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:54.588 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:54.588 Found net devices under 0000:86:00.0: cvl_0_0 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:54.588 19:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:54.588 Found net devices under 0000:86:00.1: cvl_0_1 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:21:54.588 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:54.589 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:54.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:21:54.848 00:21:54.848 --- 10.0.0.2 ping statistics --- 00:21:54.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.848 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:54.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:21:54.848 00:21:54.848 --- 10.0.0.1 ping statistics --- 00:21:54.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.848 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.848 19:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2159109 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2159109 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2159109 ']' 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.848 19:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:55.108 [2024-10-17 19:29:18.684848] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:21:55.108 [2024-10-17 19:29:18.684889] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.108 [2024-10-17 19:29:18.764341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.108 [2024-10-17 19:29:18.805968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.108 [2024-10-17 19:29:18.806008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.108 [2024-10-17 19:29:18.806016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.108 [2024-10-17 19:29:18.806022] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.108 [2024-10-17 19:29:18.806027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
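The nvmftestinit trace above stitches the two e810 ports together for tc2: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), a tagged iptables rule opens TCP/4420, both directions are verified with ping, and the target app is launched inside the namespace with core mask 0x1E (binary 11110, i.e. the four reactors on cores 1-4 reported in the notices below). Condensed from the commands traced above; waitforlisten is the framework helper that blocks until the target's RPC socket answers:

# Target port into its own namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, tagged with an SPDK_NVMF comment so cleanup can
# later strip it via iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Launch nvmf_tgt in the namespace: instance 0, tracepoint mask 0xFFFF,
# core mask 0x1E; then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"

The tagged comment is the design point: it lets nvmftestfini remove every SPDK rule in bulk, which is exactly the iptables-save/grep/iptables-restore sequence traced at the end of tc1 above.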
00:21:55.108 [2024-10-17 19:29:18.807595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.108 [2024-10-17 19:29:18.807703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.108 [2024-10-17 19:29:18.807814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.108 [2024-10-17 19:29:18.807815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:56.045 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:56.045 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:56.045 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.046 [2024-10-17 19:29:19.554031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:56.046 19:29:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.046 Malloc1 00:21:56.046 [2024-10-17 19:29:19.671039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.046 Malloc2 00:21:56.046 Malloc3 00:21:56.046 Malloc4 00:21:56.046 Malloc5 00:21:56.304 Malloc6 00:21:56.304 Malloc7 00:21:56.304 Malloc8 00:21:56.304 Malloc9 00:21:56.304 Malloc10 00:21:56.304 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:56.304 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:56.304 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:56.304 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2159387 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2159387 /var/tmp/bdevperf.sock 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2159387 ']' 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": 
"Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 
00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 [2024-10-17 19:29:20.147795] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:21:56.564 [2024-10-17 19:29:20.147843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159387 ] 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:56.564 { 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme$subsystem", 00:21:56.564 "trtype": "$TEST_TRANSPORT", 00:21:56.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.564 "adrfam": "ipv4", 
00:21:56.564 "trsvcid": "$NVMF_PORT", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.564 "hdgst": ${hdgst:-false}, 00:21:56.564 "ddgst": ${ddgst:-false} 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 } 00:21:56.564 EOF 00:21:56.564 )") 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:21:56.564 19:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme1", 00:21:56.564 "trtype": "tcp", 00:21:56.564 "traddr": "10.0.0.2", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "4420", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.564 "hdgst": false, 00:21:56.564 "ddgst": false 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 },{ 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme2", 00:21:56.564 "trtype": "tcp", 00:21:56.564 "traddr": "10.0.0.2", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "4420", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:56.564 "hdgst": false, 00:21:56.564 "ddgst": false 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 },{ 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme3", 00:21:56.564 "trtype": "tcp", 00:21:56.564 "traddr": "10.0.0.2", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "4420", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:56.564 "hdgst": false, 00:21:56.564 "ddgst": false 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 },{ 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme4", 00:21:56.564 "trtype": "tcp", 00:21:56.564 "traddr": "10.0.0.2", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "4420", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:56.564 "hdgst": false, 00:21:56.564 "ddgst": false 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 },{ 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme5", 00:21:56.564 "trtype": "tcp", 00:21:56.564 "traddr": "10.0.0.2", 00:21:56.564 "adrfam": "ipv4", 00:21:56.564 "trsvcid": "4420", 00:21:56.564 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:56.564 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:56.564 "hdgst": false, 00:21:56.564 "ddgst": false 00:21:56.564 }, 00:21:56.564 "method": "bdev_nvme_attach_controller" 00:21:56.564 },{ 00:21:56.564 "params": { 00:21:56.564 "name": "Nvme6", 00:21:56.565 "trtype": "tcp", 00:21:56.565 "traddr": "10.0.0.2", 00:21:56.565 "adrfam": "ipv4", 00:21:56.565 "trsvcid": "4420", 00:21:56.565 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:56.565 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:56.565 "hdgst": false, 00:21:56.565 "ddgst": false 00:21:56.565 }, 00:21:56.565 "method": "bdev_nvme_attach_controller" 00:21:56.565 },{ 00:21:56.565 "params": { 00:21:56.565 "name": "Nvme7", 00:21:56.565 "trtype": "tcp", 00:21:56.565 "traddr": "10.0.0.2", 00:21:56.565 
"adrfam": "ipv4", 00:21:56.565 "trsvcid": "4420", 00:21:56.565 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:56.565 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:56.565 "hdgst": false, 00:21:56.565 "ddgst": false 00:21:56.565 }, 00:21:56.565 "method": "bdev_nvme_attach_controller" 00:21:56.565 },{ 00:21:56.565 "params": { 00:21:56.565 "name": "Nvme8", 00:21:56.565 "trtype": "tcp", 00:21:56.565 "traddr": "10.0.0.2", 00:21:56.565 "adrfam": "ipv4", 00:21:56.565 "trsvcid": "4420", 00:21:56.565 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:56.565 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:56.565 "hdgst": false, 00:21:56.565 "ddgst": false 00:21:56.565 }, 00:21:56.565 "method": "bdev_nvme_attach_controller" 00:21:56.565 },{ 00:21:56.565 "params": { 00:21:56.565 "name": "Nvme9", 00:21:56.565 "trtype": "tcp", 00:21:56.565 "traddr": "10.0.0.2", 00:21:56.565 "adrfam": "ipv4", 00:21:56.565 "trsvcid": "4420", 00:21:56.565 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:56.565 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:56.565 "hdgst": false, 00:21:56.565 "ddgst": false 00:21:56.565 }, 00:21:56.565 "method": "bdev_nvme_attach_controller" 00:21:56.565 },{ 00:21:56.565 "params": { 00:21:56.565 "name": "Nvme10", 00:21:56.565 "trtype": "tcp", 00:21:56.565 "traddr": "10.0.0.2", 00:21:56.565 "adrfam": "ipv4", 00:21:56.565 "trsvcid": "4420", 00:21:56.565 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:56.565 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:56.565 "hdgst": false, 00:21:56.565 "ddgst": false 00:21:56.565 }, 00:21:56.565 "method": "bdev_nvme_attach_controller" 00:21:56.565 }' 00:21:56.565 [2024-10-17 19:29:20.223878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.565 [2024-10-17 19:29:20.264872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.469 Running I/O for 10 seconds... 
00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:58.469 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:58.729 19:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2159387
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2159387 ']'
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2159387
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2159387
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2159387'
00:21:58.729 killing process with pid 2159387
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2159387
00:21:58.729 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2159387
00:21:58.729 Received shutdown signal, test time was about 0.703998 seconds
00:21:58.729
00:21:58.729 Latency(us)
00:21:58.729 [2024-10-17T17:29:22.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:58.729 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme1n1 : 0.69 279.24 17.45 0.00 0.00 226088.80 16227.96 210713.84
00:21:58.729 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme2n1 : 0.68 282.07 17.63 0.00 0.00 218261.54 25839.91 177758.60
00:21:58.729 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme3n1 : 0.70 365.30 22.83 0.00 0.00 165123.90 13981.01 202724.69
00:21:58.729 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme4n1 : 0.68 284.06 17.75 0.00 0.00 206334.54 14230.67 207717.91
00:21:58.729 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme5n1 : 0.70 275.60 17.22 0.00 0.00 208000.08 17101.78 230686.72
00:21:58.729 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme6n1 : 0.70 275.92 17.24 0.00 0.00 203290.98 17351.44 199728.76
00:21:58.729 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme7n1 : 0.67 285.34 17.83 0.00 0.00 190488.38 17850.76 210713.84
00:21:58.729 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme8n1 : 0.68 280.74 17.55 0.00 0.00 188902.32 13856.18 195734.19
00:21:58.729 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme9n1 : 0.66 193.23 12.08 0.00 0.00 265236.48 18974.23 240673.16
00:21:58.729 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:58.729 Verification LBA range: start 0x0 length 0x400
00:21:58.729 Nvme10n1 : 0.70 272.98 17.06 0.00 0.00 184961.71 26464.06 228689.43
00:21:58.729 [2024-10-17T17:29:22.513Z] ===================================================================================================================
00:21:58.729 [2024-10-17T17:29:22.513Z] Total : 2794.49 174.66 0.00 0.00 202331.79 13856.18 240673.16
00:21:58.988 19:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2159109
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:59.925 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:59.925 rmmod nvme_tcp
00:22:00.184 rmmod nvme_fabrics
00:22:00.184 rmmod nvme_keyring
00:22:00.184 19:29:23
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 2159109 ']' 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 2159109 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2159109 ']' 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2159109 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2159109 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2159109' 00:22:00.184 killing process with pid 2159109 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2159109 00:22:00.184 19:29:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2159109 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.443 19:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.443 19:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:02.981 00:22:02.981 real 0m7.915s 00:22:02.981 user 0m23.963s 00:22:02.981 sys 0m1.335s 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:02.981 ************************************ 00:22:02.981 END TEST nvmf_shutdown_tc2 00:22:02.981 ************************************ 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:02.981 ************************************ 00:22:02.981 START TEST nvmf_shutdown_tc3 00:22:02.981 ************************************ 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.981 19:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.981 19:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.981 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:02.981 19:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.981 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.981 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.981 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:02.982 19:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:02.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:22:02.982 00:22:02.982 --- 10.0.0.2 ping statistics --- 00:22:02.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.982 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:22:02.982 00:22:02.982 --- 10.0.0.1 ping statistics --- 00:22:02.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.982 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=2160450 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 2160450 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2160450 ']' 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
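One detail worth noticing in the launch line above: the tc3 nvmf_tgt command now carries three 'ip netns exec cvl_0_0_ns_spdk' prefixes where tc2's carried two. The assignment at nvmf/common.sh@293, NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}"), runs once per nvmftestinit, so the namespace wrapper accumulates across test cases run in the same shell. Re-entering the namespace you are already in is a no-op, so the run is unaffected; the growing command line is just the visible side effect. Sketch of the behavior plus an idempotent guard (the guard is an assumption for illustration, not the shipped code):

NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # as at nvmf/common.sh@293: prepends on every call

# Assumed idempotent variant, for illustration only: prepend the wrapper once.
if [[ ${NVMF_APP[0]} != ip || ${NVMF_APP[1]} != netns ]]; then
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
fi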
00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.982 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:02.982 [2024-10-17 19:29:26.680306] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:22:02.982 [2024-10-17 19:29:26.680352] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.982 [2024-10-17 19:29:26.758325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.242 [2024-10-17 19:29:26.801026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.242 [2024-10-17 19:29:26.801061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.242 [2024-10-17 19:29:26.801068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.242 [2024-10-17 19:29:26.801074] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.242 [2024-10-17 19:29:26.801079] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:03.242 [2024-10-17 19:29:26.802568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.242 [2024-10-17 19:29:26.802679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.242 [2024-10-17 19:29:26.802785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.242 [2024-10-17 19:29:26.802786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.242 [2024-10-17 19:29:26.939330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:03.242 19:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.242 19:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.501 Malloc1 
00:22:03.501 [2024-10-17 19:29:27.054172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.501 Malloc2 00:22:03.501 Malloc3 00:22:03.501 Malloc4 00:22:03.501 Malloc5 00:22:03.501 Malloc6 00:22:03.501 Malloc7 00:22:03.761 Malloc8 00:22:03.761 Malloc9 00:22:03.761 Malloc10 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2160713 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2160713 /var/tmp/bdevperf.sock 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2160713 ']' 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
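[Editor's sketch] Two RPC steps traced above stand up the data path: shutdown.sh@21 creates the TCP transport with an 8 KiB I/O unit (nvmf_create_transport -t tcp -o -u 8192, flags copied verbatim from the trace), and shutdown.sh@27-36 rebuild rpcs.txt with one batch of commands per subsystem (the ten cat iterations) before replaying the whole file through a single rpc_cmd, which is what yields Malloc1 through Malloc10 and the listener on 10.0.0.2:4420. The batch bodies are not echoed in this trace, so the per-subsystem block below is a reconstruction consistent with the bdev names, NQNs, and listener seen in the log, not the script's literal text; the Malloc sizes in particular are assumed:

# Transport creation, exactly as traced (shutdown.sh@21):
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192

# Reconstructed shape of one rpcs.txt batch per subsystem (sizes assumed):
rpcs=$SPDK/test/nvmf/target/rpcs.txt
rm -rf "$rpcs"
for i in {1..10}; do
  cat <<EOF >> "$rpcs"
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done

# One rpc.py invocation replays every line of the batch (shutdown.sh@36):
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock < "$rpcs"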
00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.761 { 00:22:03.761 "params": { 00:22:03.761 "name": "Nvme$subsystem", 00:22:03.761 "trtype": "$TEST_TRANSPORT", 00:22:03.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.761 "adrfam": "ipv4", 00:22:03.761 "trsvcid": "$NVMF_PORT", 00:22:03.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.761 "hdgst": ${hdgst:-false}, 00:22:03.761 "ddgst": ${ddgst:-false} 00:22:03.761 }, 00:22:03.761 "method": "bdev_nvme_attach_controller" 00:22:03.761 } 00:22:03.761 EOF 00:22:03.761 )") 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.761 { 00:22:03.761 "params": { 00:22:03.761 "name": "Nvme$subsystem", 00:22:03.761 "trtype": "$TEST_TRANSPORT", 00:22:03.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.761 "adrfam": "ipv4", 00:22:03.761 "trsvcid": "$NVMF_PORT", 00:22:03.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.761 "hdgst": ${hdgst:-false}, 00:22:03.761 "ddgst": ${ddgst:-false} 00:22:03.761 }, 00:22:03.761 "method": "bdev_nvme_attach_controller" 00:22:03.761 } 00:22:03.761 EOF 00:22:03.761 )") 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.761 { 00:22:03.761 "params": { 00:22:03.761 "name": "Nvme$subsystem", 00:22:03.761 "trtype": "$TEST_TRANSPORT", 00:22:03.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.761 "adrfam": "ipv4", 00:22:03.761 "trsvcid": "$NVMF_PORT", 00:22:03.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.761 "hdgst": ${hdgst:-false}, 00:22:03.761 "ddgst": ${ddgst:-false} 00:22:03.761 }, 00:22:03.761 "method": "bdev_nvme_attach_controller" 00:22:03.761 } 00:22:03.761 EOF 00:22:03.761 )") 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- 
# config+=("$(cat <<-EOF 00:22:03.761 { 00:22:03.761 "params": { 00:22:03.761 "name": "Nvme$subsystem", 00:22:03.761 "trtype": "$TEST_TRANSPORT", 00:22:03.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.761 "adrfam": "ipv4", 00:22:03.761 "trsvcid": "$NVMF_PORT", 00:22:03.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.761 "hdgst": ${hdgst:-false}, 00:22:03.761 "ddgst": ${ddgst:-false} 00:22:03.761 }, 00:22:03.761 "method": "bdev_nvme_attach_controller" 00:22:03.761 } 00:22:03.761 EOF 00:22:03.761 )") 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.761 { 00:22:03.761 "params": { 00:22:03.761 "name": "Nvme$subsystem", 00:22:03.761 "trtype": "$TEST_TRANSPORT", 00:22:03.761 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.761 "adrfam": "ipv4", 00:22:03.761 "trsvcid": "$NVMF_PORT", 00:22:03.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.761 "hdgst": ${hdgst:-false}, 00:22:03.761 "ddgst": ${ddgst:-false} 00:22:03.761 }, 00:22:03.761 "method": "bdev_nvme_attach_controller" 00:22:03.761 } 00:22:03.761 EOF 00:22:03.761 )") 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.761 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.761 { 00:22:03.761 "params": { 00:22:03.762 "name": "Nvme$subsystem", 00:22:03.762 "trtype": "$TEST_TRANSPORT", 00:22:03.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.762 "adrfam": "ipv4", 00:22:03.762 "trsvcid": "$NVMF_PORT", 00:22:03.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.762 "hdgst": ${hdgst:-false}, 00:22:03.762 "ddgst": ${ddgst:-false} 00:22:03.762 }, 00:22:03.762 "method": "bdev_nvme_attach_controller" 00:22:03.762 } 00:22:03.762 EOF 00:22:03.762 )") 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.762 { 00:22:03.762 "params": { 00:22:03.762 "name": "Nvme$subsystem", 00:22:03.762 "trtype": "$TEST_TRANSPORT", 00:22:03.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.762 "adrfam": "ipv4", 00:22:03.762 "trsvcid": "$NVMF_PORT", 00:22:03.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.762 "hdgst": ${hdgst:-false}, 00:22:03.762 "ddgst": ${ddgst:-false} 00:22:03.762 }, 00:22:03.762 "method": "bdev_nvme_attach_controller" 00:22:03.762 } 00:22:03.762 EOF 00:22:03.762 )") 00:22:03.762 [2024-10-17 19:29:27.529804] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:22:03.762 [2024-10-17 19:29:27.529857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160713 ] 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.762 { 00:22:03.762 "params": { 00:22:03.762 "name": "Nvme$subsystem", 00:22:03.762 "trtype": "$TEST_TRANSPORT", 00:22:03.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.762 "adrfam": "ipv4", 00:22:03.762 "trsvcid": "$NVMF_PORT", 00:22:03.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.762 "hdgst": ${hdgst:-false}, 00:22:03.762 "ddgst": ${ddgst:-false} 00:22:03.762 }, 00:22:03.762 "method": "bdev_nvme_attach_controller" 00:22:03.762 } 00:22:03.762 EOF 00:22:03.762 )") 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:03.762 { 00:22:03.762 "params": { 00:22:03.762 "name": "Nvme$subsystem", 00:22:03.762 "trtype": "$TEST_TRANSPORT", 00:22:03.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.762 "adrfam": "ipv4", 00:22:03.762 "trsvcid": "$NVMF_PORT", 00:22:03.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.762 "hdgst": ${hdgst:-false}, 00:22:03.762 "ddgst": ${ddgst:-false} 00:22:03.762 }, 00:22:03.762 "method": "bdev_nvme_attach_controller" 00:22:03.762 } 00:22:03.762 EOF 00:22:03.762 )") 00:22:03.762 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:04.021 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:22:04.021 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:22:04.021 { 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme$subsystem", 00:22:04.021 "trtype": "$TEST_TRANSPORT", 00:22:04.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "$NVMF_PORT", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.021 "hdgst": ${hdgst:-false}, 00:22:04.021 "ddgst": ${ddgst:-false} 00:22:04.021 }, 00:22:04.021 "method": "bdev_nvme_attach_controller" 00:22:04.021 } 00:22:04.021 EOF 00:22:04.021 )") 00:22:04.021 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:22:04.021 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 
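[Editor's sketch] gen_nvmf_target_json, traced through common.sh@558-584 above, builds one bdev_nvme_attach_controller stanza per subsystem with here-docs into the config array, joins the stanzas with IFS=',', and pipes the result through jq; the merged document it prints appears just below, and bdevperf consumes it via process substitution as --json /dev/fd/63. A skeleton of that generator, with the stanza fields matching the trace; the outer subsystems/bdev wrapper is abbreviated and assumed from how bdevperf consumes the document, not read out of the trace:

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "tcp", "traddr": "10.0.0.2",
  "adrfam": "ipv4", "trsvcid": "4420",
  "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
  "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
  "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
    )")
  done
  local IFS=,
  # Assumed wrapper: bdevperf expects a bdev-subsystem config document.
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}

# Process substitution hands the JSON to bdevperf on /dev/fd/63, as in the trace:
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
  --json <(gen_nvmf_target_json {1..10}) -q 64 -o 65536 -w verify -t 10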
00:22:04.021 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:22:04.021 19:29:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme1", 00:22:04.021 "trtype": "tcp", 00:22:04.021 "traddr": "10.0.0.2", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "4420", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.021 "hdgst": false, 00:22:04.021 "ddgst": false 00:22:04.021 }, 00:22:04.021 "method": "bdev_nvme_attach_controller" 00:22:04.021 },{ 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme2", 00:22:04.021 "trtype": "tcp", 00:22:04.021 "traddr": "10.0.0.2", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "4420", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.021 "hdgst": false, 00:22:04.021 "ddgst": false 00:22:04.021 }, 00:22:04.021 "method": "bdev_nvme_attach_controller" 00:22:04.021 },{ 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme3", 00:22:04.021 "trtype": "tcp", 00:22:04.021 "traddr": "10.0.0.2", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "4420", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.021 "hdgst": false, 00:22:04.021 "ddgst": false 00:22:04.021 }, 00:22:04.021 "method": "bdev_nvme_attach_controller" 00:22:04.021 },{ 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme4", 00:22:04.021 "trtype": "tcp", 00:22:04.021 "traddr": "10.0.0.2", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "4420", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.021 "hdgst": false, 00:22:04.021 "ddgst": false 00:22:04.021 }, 00:22:04.021 "method": "bdev_nvme_attach_controller" 00:22:04.021 },{ 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme5", 00:22:04.021 "trtype": "tcp", 00:22:04.021 "traddr": "10.0.0.2", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "4420", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.021 "hdgst": false, 00:22:04.021 "ddgst": false 00:22:04.021 }, 00:22:04.021 "method": "bdev_nvme_attach_controller" 00:22:04.021 },{ 00:22:04.021 "params": { 00:22:04.021 "name": "Nvme6", 00:22:04.021 "trtype": "tcp", 00:22:04.021 "traddr": "10.0.0.2", 00:22:04.021 "adrfam": "ipv4", 00:22:04.021 "trsvcid": "4420", 00:22:04.021 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.021 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.022 "hdgst": false, 00:22:04.022 "ddgst": false 00:22:04.022 }, 00:22:04.022 "method": "bdev_nvme_attach_controller" 00:22:04.022 },{ 00:22:04.022 "params": { 00:22:04.022 "name": "Nvme7", 00:22:04.022 "trtype": "tcp", 00:22:04.022 "traddr": "10.0.0.2", 00:22:04.022 "adrfam": "ipv4", 00:22:04.022 "trsvcid": "4420", 00:22:04.022 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.022 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.022 "hdgst": false, 00:22:04.022 "ddgst": false 00:22:04.022 }, 00:22:04.022 "method": "bdev_nvme_attach_controller" 00:22:04.022 },{ 00:22:04.022 "params": { 00:22:04.022 "name": "Nvme8", 00:22:04.022 "trtype": "tcp", 00:22:04.022 "traddr": "10.0.0.2", 00:22:04.022 "adrfam": "ipv4", 00:22:04.022 "trsvcid": "4420", 00:22:04.022 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.022 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:04.022 "hdgst": false, 00:22:04.022 "ddgst": false 00:22:04.022 }, 00:22:04.022 "method": "bdev_nvme_attach_controller" 00:22:04.022 },{ 00:22:04.022 "params": { 00:22:04.022 "name": "Nvme9", 00:22:04.022 "trtype": "tcp", 00:22:04.022 "traddr": "10.0.0.2", 00:22:04.022 "adrfam": "ipv4", 00:22:04.022 "trsvcid": "4420", 00:22:04.022 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.022 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:04.022 "hdgst": false, 00:22:04.022 "ddgst": false 00:22:04.022 }, 00:22:04.022 "method": "bdev_nvme_attach_controller" 00:22:04.022 },{ 00:22:04.022 "params": { 00:22:04.022 "name": "Nvme10", 00:22:04.022 "trtype": "tcp", 00:22:04.022 "traddr": "10.0.0.2", 00:22:04.022 "adrfam": "ipv4", 00:22:04.022 "trsvcid": "4420", 00:22:04.022 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.022 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.022 "hdgst": false, 00:22:04.022 "ddgst": false 00:22:04.022 }, 00:22:04.022 "method": "bdev_nvme_attach_controller" 00:22:04.022 }' 00:22:04.022 [2024-10-17 19:29:27.607292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.022 [2024-10-17 19:29:27.647947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.925 Running I/O for 10 seconds... 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:05.925 19:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:05.925 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=84 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 84 -ge 100 ']' 00:22:06.185 19:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:06.465 19:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2160450 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2160450 ']' 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2160450 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2160450 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2160450' 00:22:06.465 killing process with pid 2160450 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2160450 00:22:06.465 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2160450 00:22:06.465
[2024-10-17 19:29:30.126845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31130 is same with the state(6) to be set
[last message repeated for tqpair=0x1d31130 roughly 60 more times, 19:29:30.126885 through 19:29:30.127289; identical duplicates trimmed]
[2024-10-17 19:29:30.129166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d33b90 is same with the state(6) to be set
[2024-10-17 19:29:30.130025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set
[last message repeated for tqpair=0x1d31600 roughly 60 more times, 19:29:30.130040 through 19:29:30.130434; identical duplicates trimmed]
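[Editor's sketch] Condensing the gate-and-kill sequence whose trace ends above: waitforio polls Nvme1n1's num_read_ops out of bdevperf (3, then 84, then 195 in this run) and succeeds once the count reaches 100; killprocess then signals the still-running target. The recv-state errors above, and the aborted WRITE/READ completions summarized below, are the expected fallout of deleting the target's TCP qpairs while bdevperf still has a 64-deep queue in flight, which is exactly what shutdown_tc3 exercises. The loop extracted, with socket, bounds, and threshold as traced:

i=10 ret=1
while ((i != 0)); do
  read_io_count=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock \
                    bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
  if [ "$read_io_count" -ge 100 ]; then
    ret=0
    break                         # enough verified reads observed
  fi
  sleep 0.25
  ((i--))
done
[ "$ret" -eq 0 ] && kill 2160450 && wait 2160450   # killprocess, pid as traced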
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31600 is same with the state(6) to be set 00:22:06.466 [2024-10-17 19:29:30.130853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.466 [2024-10-17 19:29:30.130902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.466 [2024-10-17 19:29:30.130918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.466 [2024-10-17 19:29:30.130933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.466 [2024-10-17 19:29:30.130948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.466 [2024-10-17 19:29:30.130963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.466 [2024-10-17 19:29:30.130978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:06.466 [2024-10-17 19:29:30.130985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.130993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
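The paired NOTICE lines above are SPDK's nvme_qpair.c printers at work: nvme_io_qpair_print_command dumps each READ/WRITE still queued on I/O qpair sqid:1, and spdk_nvme_print_completion then reports the generic status ABORTED - SQ DELETION (00/08) that is synthesized for every outstanding request when the submission queue is torn down during the controller reset. A minimal sketch of how a run like this could be summarized offline (illustrative only; it assumes the console output was saved to a hypothetical file named build.log):

    import re
    from collections import Counter

    # one NOTICE pair per aborted request: first the command printer,
    # then the completion printer with the ABORTED - SQ DELETION status
    CMD = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+)")
    ABORT = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")

    counts = Counter()
    pending = None  # the command most recently printed, awaiting its completion line
    with open("build.log") as log:  # hypothetical path to the saved console log
        for line in log:
            m = CMD.search(line)
            if m:
                pending = (m.group(1), m.group(2))  # (opcode, sqid)
            elif pending and ABORT.search(line):
                counts[pending] += 1
                pending = None

    for (opcode, sqid), n in sorted(counts.items()):
        print(f"sqid {sqid}: {n} {opcode} commands aborted by SQ deletion")

Against a saved copy of this console output it would print one tally line per (opcode, sqid) pair rather than asserting any particular totals.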
00:22:06.467 [2024-10-17 19:29:30.131288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 
19:29:30.131438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131588] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.467 [2024-10-17 19:29:30.131617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.467 [2024-10-17 19:29:30.131624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.468 [2024-10-17 19:29:30.131849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31ad0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.131915] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d1abe0 was disconnected and freed. reset controller.
00:22:06.468 [2024-10-17 19:29:30.131935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31ad0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.131943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31ad0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.131951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31ad0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.131976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.131987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.131995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14e50 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132169] nvme_tcp.c:
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b152b0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a710 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.468 [2024-10-17 19:29:30.132323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.468 [2024-10-17 19:29:30.132329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62770 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be 
set 00:22:06.468 [2024-10-17 19:29:30.132981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.132994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.468 [2024-10-17 19:29:30.133055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133251] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.133364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d31fc0 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.469 [2024-10-17 19:29:30.134032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f62770 (9): Bad file descriptor 00:22:06.469 [2024-10-17 19:29:30.134116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 
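Nearly all of the volume in this stretch is the same two messages repeated with fresh timestamps: tcp.c:1773 (nvmf_tcp_qpair_set_recv_state, target side) and nvme_tcp.c:337 (nvme_tcp_qpair_set_recv_state, initiator side) both log an error when a qpair's receive state is set to the value it already holds, here state(6), while qpairs are being torn down. When reading such a log by eye it helps to collapse consecutive repeats; a small sketch (illustrative only, not part of the test run; it reads a saved copy of this console output on stdin):

    import re
    import sys

    # strip the Jenkins wall-clock prefix and the SPDK timestamp so that
    # consecutive repeats of the same message compare equal
    JENKINS_TS = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3} ")
    SPDK_TS = re.compile(r"\[\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+\] ")

    def collapse(lines):
        prev, run = None, 0
        for line in lines:
            key = SPDK_TS.sub("", JENKINS_TS.sub("", line))
            if key == prev:
                run += 1
                continue
            if run:
                yield f"    ... last message repeated {run} more times\n"
            run = 0
            prev = key
            yield line
        if run:
            yield f"    ... last message repeated {run} more times\n"

    if __name__ == "__main__":
        sys.stdout.writelines(collapse(sys.stdin))

Keying on the message with both timestamps stripped is what lets runs like the tqpair=0x1d31fc0 block above fold down to a single line plus a repeat count.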
00:22:06.469 [2024-10-17 19:29:30.134143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.469 [2024-10-17 19:29:30.134175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.469 [2024-10-17 19:29:30.134191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.469 [2024-10-17 19:29:30.134204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.469 [2024-10-17 19:29:30.134212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.469 [2024-10-17 19:29:30.134227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.469 [2024-10-17 19:29:30.134235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.469
[2024-10-17 19:29:30.134243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.469 [2024-10-17 19:29:30.134250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.469 [2024-10-17 19:29:30.134255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:22:06.470 [2024-10-17 19:29:30.134330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set
00:22:06.470 [2024-10-17 19:29:30.134425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set 00:22:06.470 [2024-10-17 19:29:30.134590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32490 is same with the state(6) to be set
00:22:06.470 [2024-10-17 19:29:30.134617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.470 [2024-10-17 19:29:30.134657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.470 [2024-10-17 19:29:30.134666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471
[2024-10-17 19:29:30.134770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 
19:29:30.134924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.134988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.134997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135071] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.471 [2024-10-17 19:29:30.135201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.471 [2024-10-17 19:29:30.135556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.471 [2024-10-17 19:29:30.135650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 
19:29:30.135725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same 
with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.135963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32810 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137095] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the 
state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.472 [2024-10-17 19:29:30.137305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.137453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d32ce0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 
19:29:30.138323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.138418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.149359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.473 [2024-10-17 19:29:30.149370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.473 [2024-10-17 19:29:30.149387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.473 [2024-10-17 19:29:30.149402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149465] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2042220 was disconnected and freed. reset controller. 00:22:06.473 [2024-10-17 19:29:30.149808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b14e50 (9): Bad file descriptor 00:22:06.473 [2024-10-17 19:29:30.149855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.149869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.149889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.149907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.149928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11690 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.149970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.149992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.150001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.150011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.150024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.150034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.150043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.150051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f363c0 is same with the state(6) to be set 00:22:06.473 [2024-10-17 19:29:30.150092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.150103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.473 [2024-10-17 19:29:30.150113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.473 [2024-10-17 19:29:30.150122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39910 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.150199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29610 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.150306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.474 [2024-10-17 19:29:30.150376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.474 [2024-10-17 19:29:30.150384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a530 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.150402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b152b0 (9): Bad file descriptor 00:22:06.474 [2024-10-17 19:29:30.150420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a710 (9): Bad file descriptor 00:22:06.474 [2024-10-17 19:29:30.152353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.474 [2024-10-17 19:29:30.152581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.474 [2024-10-17 19:29:30.152616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f62770 with addr=10.0.0.2, port=4420 00:22:06.474 [2024-10-17 19:29:30.152628] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62770 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.474 [2024-10-17 19:29:30.153161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0a710 with addr=10.0.0.2, port=4420 00:22:06.474 [2024-10-17 19:29:30.153171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a710 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f62770 (9): Bad file descriptor 00:22:06.474 [2024-10-17 19:29:30.153232] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.474 [2024-10-17 19:29:30.153285] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.474 [2024-10-17 19:29:30.153902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a710 (9): Bad file descriptor 00:22:06.474 [2024-10-17 19:29:30.153906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with 
the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.474 [2024-10-17 19:29:30.153933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.474 [2024-10-17 19:29:30.153943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.474
[2024-10-17 19:29:30.153951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.153996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154006] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474
[2024-10-17 19:29:30.154050] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.474 [2024-10-17 19:29:30.154057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set
00:22:06.474 [2024-10-17 19:29:30.154090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154103] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.474 [2024-10-17 19:29:30.154106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154152] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.474 [2024-10-17 19:29:30.154156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154203] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.474 [2024-10-17 19:29:30.154207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154217] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d331d0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.154317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.474 [2024-10-17 19:29:30.154330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.474 [2024-10-17 19:29:30.154338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.474 [2024-10-17 19:29:30.154349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.474 [2024-10-17 19:29:30.154517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.474
[2024-10-17 19:29:30.154990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.155008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.474 [2024-10-17 19:29:30.155015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475
[2024-10-17 19:29:30.155085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475
[2024-10-17 19:29:30.155182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155260] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475
[2024-10-17 19:29:30.155294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 c[2024-10-17 19:29:30.155387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 [2024-10-17 19:29:30.155403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.475 [2024-10-17 19:29:30.155417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:12[2024-10-17 19:29:30.155424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.475 the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.475 [2024-10-17 19:29:30.155434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.476 [2024-10-17 19:29:30.155446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:12[2024-10-17 19:29:30.155448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 the state(6) to be set 00:22:06.476 [2024-10-17 19:29:30.155458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.476 [2024-10-17 19:29:30.155461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.476 [2024-10-17 19:29:30.155472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.476 [2024-10-17 19:29:30.155473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d336a0 is same with the state(6) to be set 00:22:06.476 [2024-10-17 19:29:30.155483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.155985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.155994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:06.476 [2024-10-17 19:29:30.156131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.476 [2024-10-17 19:29:30.156262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.476 [2024-10-17 19:29:30.156272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 
19:29:30.156345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.156434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.156446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e63b00 is same with the state(6) to be set 00:22:06.477 [2024-10-17 19:29:30.156504] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2e63b00 was disconnected and freed. reset controller. 00:22:06.477 [2024-10-17 19:29:30.157856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:06.477 [2024-10-17 19:29:30.157881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39910 (9): Bad file descriptor 00:22:06.477 [2024-10-17 19:29:30.158568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.477 [2024-10-17 19:29:30.158590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39910 with addr=10.0.0.2, port=4420 00:22:06.477 [2024-10-17 19:29:30.158607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39910 is same with the state(6) to be set 00:22:06.477 [2024-10-17 19:29:30.158695] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:06.477 [2024-10-17 19:29:30.158713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39910 (9): Bad file descriptor 00:22:06.477 [2024-10-17 19:29:30.158774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:06.477 [2024-10-17 19:29:30.158785] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:06.477 [2024-10-17 19:29:30.158795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:22:06.477 [2024-10-17 19:29:30.158842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.477 [2024-10-17 19:29:30.159812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11690 (9): Bad file descriptor 00:22:06.477 [2024-10-17 19:29:30.159834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f363c0 (9): Bad file descriptor 00:22:06.477 [2024-10-17 19:29:30.159870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.477 [2024-10-17 19:29:30.159883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.159894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.477 [2024-10-17 19:29:30.159903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.159913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.477 [2024-10-17 19:29:30.159921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.159932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.477 [2024-10-17 19:29:30.159940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.159949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63350 is same with the state(6) to be set 00:22:06.477 [2024-10-17 19:29:30.159970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a29610 (9): Bad file descriptor 00:22:06.477 [2024-10-17 19:29:30.159990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a530 (9): Bad file descriptor 00:22:06.477 [2024-10-17 19:29:30.160118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 
[2024-10-17 19:29:30.160348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.477 [2024-10-17 19:29:30.160408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.477 [2024-10-17 19:29:30.160415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 
19:29:30.160513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.160984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.160991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.478 [2024-10-17 19:29:30.161103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.478 [2024-10-17 19:29:30.161112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.161119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.161128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.161135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.161143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.161150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.161159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.161167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.161175] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x203f9e0 is same with the state(6) to be set 00:22:06.479 [2024-10-17 19:29:30.162242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.479 [2024-10-17 19:29:30.162394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.479 [2024-10-17 19:29:30.162411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:06.479 [2024-10-17 19:29:30.162418 - 19:29:30.163343] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:9-63 nsid:1 lba:25728-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [55 repeated command/completion pairs condensed]
00:22:06.480 [2024-10-17 19:29:30.163351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2040d30 is same with the state(6) to be set
00:22:06.480 [2024-10-17 19:29:30.164386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:06.480 [2024-10-17 19:29:30.164400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:06.480 [2024-10-17 19:29:30.164468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:06.480 [2024-10-17 19:29:30.164644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.480 [2024-10-17 19:29:30.164659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b152b0 with addr=10.0.0.2, port=4420
00:22:06.480 [2024-10-17 19:29:30.164668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b152b0 is same with the state(6) to be set
00:22:06.480 [2024-10-17 19:29:30.164865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.480 [2024-10-17 19:29:30.164876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b14e50 with addr=10.0.0.2, port=4420
00:22:06.480 [2024-10-17 19:29:30.164884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14e50 is same with the state(6) to be set
00:22:06.480 [2024-10-17 19:29:30.165364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:06.480 [2024-10-17 19:29:30.165580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.480 [2024-10-17 19:29:30.165592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f62770 with addr=10.0.0.2, port=4420
00:22:06.480 [2024-10-17 19:29:30.165599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62770 is same with the state(6) to be set
00:22:06.480 [2024-10-17 19:29:30.165613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b152b0 (9): Bad file descriptor
00:22:06.480 [2024-10-17 19:29:30.165624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b14e50 (9): Bad file descriptor
00:22:06.480 [2024-10-17 19:29:30.165856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.480 [2024-10-17 19:29:30.165869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0a710 with addr=10.0.0.2, port=4420
00:22:06.480 [2024-10-17 19:29:30.165876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a710 is same with the state(6) to be set
00:22:06.480 [2024-10-17 19:29:30.165887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f62770 (9): Bad file descriptor
00:22:06.480 [2024-10-17 19:29:30.165895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:06.480 [2024-10-17 19:29:30.165903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:06.480 [2024-10-17 19:29:30.165911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:06.480 [2024-10-17 19:29:30.165923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:22:06.480 [2024-10-17 19:29:30.165930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:22:06.480 [2024-10-17 19:29:30.165937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:22:06.480 [2024-10-17 19:29:30.165979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.480 [2024-10-17 19:29:30.165991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.480 [2024-10-17 19:29:30.165999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a710 (9): Bad file descriptor
00:22:06.480 [2024-10-17 19:29:30.166007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:22:06.480 [2024-10-17 19:29:30.166014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:22:06.480 [2024-10-17 19:29:30.166022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:22:06.480 [2024-10-17 19:29:30.166052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.480 [2024-10-17 19:29:30.166060] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:22:06.480 [2024-10-17 19:29:30.166066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:22:06.480 [2024-10-17 19:29:30.166073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:22:06.480 [2024-10-17 19:29:30.166103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
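[Editor's note, not part of the captured log: the repeated "connect() failed, errno = 111" entries above are ECONNREFUSED, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 (the NVMe/TCP listener the initiator keeps retrying) at that instant, which is expected while the target side is being torn down in this test. A minimal standalone sketch, not SPDK code, with the address and port copied from the log, reproduces the same errno with a plain POSIX socket:]

/* Minimal sketch (not SPDK code): connect() to a reachable host with no
 * listener on the port fails with errno 111 (ECONNREFUSED) on Linux,
 * matching the posix.c:1055:posix_sock_create errors in this log.
 * 10.0.0.2 and 4420 are taken from the log lines above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

[When the port is closed but the host answers, this prints "connect() failed, errno = 111 (Connection refused)". The "Ctrlr is in error state" / "Resetting controller failed." lines that follow each refused connection appear to be the bdev_nvme layer abandoning that reconnect attempt.]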
00:22:06.481 [2024-10-17 19:29:30.168072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:06.481 [2024-10-17 19:29:30.168354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.481 [2024-10-17 19:29:30.168370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39910 with addr=10.0.0.2, port=4420
00:22:06.481 [2024-10-17 19:29:30.168378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39910 is same with the state(6) to be set
00:22:06.481 [2024-10-17 19:29:30.168407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39910 (9): Bad file descriptor
00:22:06.481 [2024-10-17 19:29:30.168444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:22:06.481 [2024-10-17 19:29:30.168454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:22:06.481 [2024-10-17 19:29:30.168462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:22:06.481 [2024-10-17 19:29:30.168492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.481 [2024-10-17 19:29:30.169844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63350 (9): Bad file descriptor
00:22:06.481 [2024-10-17 19:29:30.169948 - 19:29:30.170983] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 repeated command/completion pairs condensed]
00:22:06.482 [2024-10-17 19:29:30.170991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2043710 is same with the state(6) to be set
00:22:06.482 [2024-10-17 19:29:30.171976 - 19:29:30.172983] nvme_qpair.c: 243/474: *NOTICE*: READ sqid:1 cid:0-61 nsid:1 lba:24576-32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [62 repeated command/completion pairs condensed; the run continues below]
00:22:06.484 [2024-10-17 
19:29:30.172992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.172999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.173008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.173015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.173022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2044740 is same with the state(6) to be set 00:22:06.484 [2024-10-17 19:29:30.174002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.484 [2024-10-17 19:29:30.174470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.484 [2024-10-17 19:29:30.174481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.174984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.174992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.175001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.175008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.175017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.175024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.175032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.175041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.175049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29c8340 is same with the state(6) to be set 00:22:06.485 [2024-10-17 19:29:30.176028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.176053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.176070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.176087] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.176103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.176119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.485 [2024-10-17 19:29:30.176135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.485 [2024-10-17 19:29:30.176142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:06.486 [2024-10-17 19:29:30.176742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.486 [2024-10-17 19:29:30.176751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.486 [2024-10-17 19:29:30.176758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 
19:29:30.176899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.176985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.176993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.177001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.177008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.177017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.177031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.177040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.177047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.177056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.487 [2024-10-17 19:29:30.177063] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.487 [2024-10-17 19:29:30.177070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c15ed0 is same with the state(6) to be set 00:22:06.487 [2024-10-17 19:29:30.178028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:06.487 [2024-10-17 19:29:30.178046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.487 [2024-10-17 19:29:30.178063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:06.487 [2024-10-17 19:29:30.178072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:06.487 [2024-10-17 19:29:30.178431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.487 [2024-10-17 19:29:30.178447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11690 with addr=10.0.0.2, port=4420 00:22:06.487 [2024-10-17 19:29:30.178456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11690 is same with the state(6) to be set 00:22:06.487 [2024-10-17 19:29:30.178592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.487 [2024-10-17 19:29:30.178614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3a530 with addr=10.0.0.2, port=4420 00:22:06.487 [2024-10-17 19:29:30.178623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a530 is same with the state(6) to be set 00:22:06.487 [2024-10-17 19:29:30.178768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.487 [2024-10-17 19:29:30.178779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f363c0 with addr=10.0.0.2, port=4420 00:22:06.487 [2024-10-17 19:29:30.178787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f363c0 is same with the state(6) to be set 00:22:06.487 [2024-10-17 19:29:30.178908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.487 [2024-10-17 19:29:30.178919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a29610 with addr=10.0.0.2, port=4420 00:22:06.487 [2024-10-17 19:29:30.178927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29610 is same with the state(6) to be set 00:22:06.487 [2024-10-17 19:29:30.179817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.487 [2024-10-17 19:29:30.179835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.487 [2024-10-17 19:29:30.179844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.487 [2024-10-17 19:29:30.179854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.487 [2024-10-17 19:29:30.179888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11690 (9): Bad file descriptor 00:22:06.487 [2024-10-17 19:29:30.179900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a530 (9): Bad file 
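Note on the "(00/08)" printed in every completion above: SPDK formats NVMe completion status as (status-code-type/status-code) in hex, so this is SCT 0x0 (generic command status) with SC 0x08 (Command Aborted due to SQ Deletion), the status a controller reports for commands still outstanding on a submission queue that gets deleted during a reset. A minimal sketch of how that pair decodes from a completion entry, assuming the definitions in spdk/nvme_spec.h (the helper name is ours, not SPDK's):

    #include <stdbool.h>
    #include "spdk/nvme_spec.h"

    /* Hypothetical helper: true for the "(00/08)" completions seen in this log.
     * SPDK_NVME_SCT_GENERIC is 0x0 and SPDK_NVME_SC_ABORTED_SQ_DELETION is 0x08
     * per spdk/nvme_spec.h; spdk_nvme_print_completion renders them as (sct/sc). */
    static bool cpl_is_aborted_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
        return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
               cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }

The other fields echoed in each record (cdw0, sqhd, and the p/m/dnr bits) come from the same struct spdk_nvme_cpl completion entry; dnr:0 in particular means the abort is not flagged "do not retry", which is why the driver goes on to reset and retry below.
00:22:06.487 [2024-10-17 19:29:30.178028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:06.487 [2024-10-17 19:29:30.178046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:06.487 [2024-10-17 19:29:30.178063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:06.487 [2024-10-17 19:29:30.178072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:06.487 [2024-10-17 19:29:30.178431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.178447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11690 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.178456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11690 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.178592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.178614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3a530 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.178623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a530 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.178768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.178779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f363c0 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.178787] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f363c0 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.178908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.178919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a29610 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.178927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29610 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.179817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:06.487 [2024-10-17 19:29:30.179835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:06.487 [2024-10-17 19:29:30.179844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:06.487 [2024-10-17 19:29:30.179854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:06.487 [2024-10-17 19:29:30.179888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11690 (9): Bad file descriptor
00:22:06.487 [2024-10-17 19:29:30.179900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a530 (9): Bad file descriptor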
00:22:06.487 [2024-10-17 19:29:30.179908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f363c0 (9): Bad file descriptor
00:22:06.487 [2024-10-17 19:29:30.179928] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a29610 (9): Bad file descriptor
00:22:06.487 [2024-10-17 19:29:30.179968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:06.487 [2024-10-17 19:29:30.180230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.180244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b14e50 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.180252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14e50 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.180488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.180501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b152b0 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.180509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b152b0 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.180611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.180623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f62770 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.180631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62770 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.180845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.487 [2024-10-17 19:29:30.180857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0a710 with addr=10.0.0.2, port=4420
00:22:06.487 [2024-10-17 19:29:30.180865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a710 is same with the state(6) to be set
00:22:06.487 [2024-10-17 19:29:30.180873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:22:06.487 [2024-10-17 19:29:30.180880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:22:06.487 [2024-10-17 19:29:30.180888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:22:06.487 [2024-10-17 19:29:30.180898] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:06.487 [2024-10-17 19:29:30.180905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:06.487 [2024-10-17 19:29:30.180911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
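The sequence above is the driver's reset cycle failing: each controller is disconnected ("resetting controller"), the TCP qpair is re-dialed, connect() is refused (errno 111 is ECONNREFUSED on Linux, i.e. nothing is accepting on 10.0.0.2:4420 while the target is down), and the reconnect poll then declares "controller reinitialization failed" and marks the controller failed. A minimal sketch of that cycle using the public nvme API, assuming the spdk_nvme_ctrlr_disconnect / reconnect_async / reconnect_poll_async trio from spdk/nvme.h; the helper name and the exact return-code semantics are our reading, not verified against this SPDK revision:

    #include <errno.h>
    #include "spdk/nvme.h"

    /* Hypothetical helper mirroring the logged flow: disconnect, start an
     * async reconnect, then poll until it either completes or fails. */
    static int reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);  /* logs "resetting controller" */
        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);      /* re-dials the transport */
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);                     /* assumed: -EAGAIN = in progress */
        /* With connect() refused, the poll ends non-zero and the controller is
         * failed, matching "controller reinitialization failed" / "in failed
         * state." in the records above. */
        return rc;
    }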
00:22:06.487 [2024-10-17 19:29:30.180920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:22:06.487 [2024-10-17 19:29:30.180926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:22:06.487 [2024-10-17 19:29:30.180934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:22:06.487 [2024-10-17 19:29:30.180943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:22:06.487 [2024-10-17 19:29:30.180949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:22:06.487 [2024-10-17 19:29:30.180956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:22:06.487 [2024-10-17 19:29:30.181008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.487 [2024-10-17 19:29:30.181026] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.487 [2024-10-17 19:29:30.181033] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.487 [2024-10-17 19:29:30.181039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:06.488 [2024-10-17 19:29:30.181184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.488 [2024-10-17 19:29:30.181199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f39910 with addr=10.0.0.2, port=4420
00:22:06.488 [2024-10-17 19:29:30.181207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f39910 is same with the state(6) to be set
00:22:06.488 [2024-10-17 19:29:30.181217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b14e50 (9): Bad file descriptor
00:22:06.488 [2024-10-17 19:29:30.181227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b152b0 (9): Bad file descriptor
00:22:06.488 [2024-10-17 19:29:30.181236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f62770 (9): Bad file descriptor
00:22:06.488 [2024-10-17 19:29:30.181244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a710 (9): Bad file descriptor
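The parenthesized numbers in these socket errors are plain POSIX errno values: on Linux 111 is ECONNREFUSED (the reconnect attempts above) and 9 is EBADF, i.e. the flush runs against a qpair whose socket fd has already been closed by the failed reconnect. A trivial standalone check of that mapping, using only standard C:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux: prints "111 = Connection refused" and "9 = Bad file descriptor",
         * matching the errno values embedded in the log records above. */
        printf("%d = %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        printf("%d = %s\n", EBADF, strerror(EBADF));
        return 0;
    }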
sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
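Every completion above carries status (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion, so the in-flight verify I/O is being failed back when the submission queues are torn down, not silently lost. When triaging a run like this it can help to tally the aborted commands per opcode straight from the captured console output; a sketch (the log path is hypothetical):

    # count aborted READ vs WRITE commands in a saved console log
    log=console.log   # hypothetical path to this build's captured output
    grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+' "$log" |
        awk '{print $1}' | sort | uniq -c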
00:22:06.488 [2024-10-17 19:29:30.181529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 
19:29:30.181713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181873] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.488 [2024-10-17 19:29:30.181936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.488 [2024-10-17 19:29:30.181945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.181953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.181960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.181969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.181975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.181984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.181991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.181999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:06.489 [2024-10-17 19:29:30.182355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.489 [2024-10-17 19:29:30.182363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d196b0 is same with the state(6) to be set 00:22:06.489 task offset: 29696 on job bdev=Nvme10n1 fails 00:22:06.489 00:22:06.489 Latency(us)
00:22:06.489 [2024-10-17T17:29:30.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:06.489 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme1n1 ended in about 0.95 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme1n1 : 0.95 201.86 12.62 67.29 0.00 235519.51 16477.62 195734.19
00:22:06.489 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme2n1 ended in about 0.95 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme2n1 : 0.95 201.40 12.59 67.13 0.00 232142.38 15291.73 216705.71
00:22:06.489 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme3n1 ended in about 0.94 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme3n1 : 0.94 284.92 17.81 68.04 0.00 173524.42 13294.45 209715.20
00:22:06.489 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme4n1 ended in about 0.96 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme4n1 : 0.96 199.80 12.49 66.60 0.00 226322.04 14417.92 245666.38
00:22:06.489 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme5n1 ended in about 0.96 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme5n1 : 0.96 199.38 12.46 66.46 0.00 223039.39 17850.76 216705.71
00:22:06.489 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme6n1 ended in about 0.96 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme6n1 : 0.96 198.97 12.44 66.32 0.00 219657.02 18599.74 214708.42
00:22:06.489 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme7n1 ended in about 0.97 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme7n1 : 0.97 198.55 12.41 66.18 0.00 216325.61 18225.25 211712.49
00:22:06.489 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme8n1 ended in about 0.95 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme8n1 : 0.95 202.82 12.68 67.61 0.00 207360.73 25590.25 207717.91
00:22:06.489 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme9n1 ended in about 0.97 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme9n1 : 0.97 202.61 12.66 65.82 0.00 205912.89 16477.62 224694.86
00:22:06.489 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:06.489 Job: Nvme10n1 ended in about 0.92 seconds with error
00:22:06.489 Verification LBA range: start 0x0 length 0x400
00:22:06.489 Nvme10n1 : 0.92 207.99 13.00 69.33 0.00 193816.56 5835.82 243669.09
00:22:06.489 [2024-10-17T17:29:30.273Z] ===================================================================================================================
00:22:06.489 [2024-10-17T17:29:30.273Z] Total : 2098.29 131.14 670.78 0.00 212201.55 5835.82 245666.38
00:22:06.489 [2024-10-17 19:29:30.214036] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:06.489 [2024-10-17 19:29:30.214088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:22:06.489 [2024-10-17 19:29:30.214128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f39910 (9): Bad file descriptor 00:22:06.489 [2024-10-17 19:29:30.214142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.489 [2024-10-17 19:29:30.214150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.489 [2024-10-17 19:29:30.214159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:06.490 [2024-10-17 19:29:30.214173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.214180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.214187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.490 [2024-10-17 19:29:30.214197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.214204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.214211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.490 [2024-10-17 19:29:30.214223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.214230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.214237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.490 [2024-10-17 19:29:30.214315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.214325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.214331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.214336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
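A quick consistency check on the table: the jobs run 64 KiB (65536-byte) I/Os, so MiB/s is simply IOPS/16, and Fail/s counts the I/Os completing in error per second. For Nvme1n1, 201.86 IOPS gives 201.86/16 ≈ 12.62 MiB/s, matching the row. The same check in shell:

    # recompute the MiB/s column from IOPS and the 65536-byte IO size
    awk 'BEGIN { iops = 201.86; printf "%.2f MiB/s\n", iops * 65536 / (1024 * 1024) }'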
00:22:06.490 [2024-10-17 19:29:30.214617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.214634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f63350 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.214646] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f63350 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.214654] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.214660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.214667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:06.490 [2024-10-17 19:29:30.214739] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:06.490 [2024-10-17 19:29:30.215014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.215043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f63350 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.215084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215096] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215105] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.215160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.215167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:06.490 [2024-10-17 19:29:30.215194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:06.490 [2024-10-17 19:29:30.215226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
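bdev_nvme_failover_ctrlr_unsafe declining with already in progress shows the host serializing recovery: only one reset runs per controller at a time, and the remaining cnodes queue behind it. While such a run is live, the per-controller reconnect state can be watched over the application's RPC socket; a sketch assuming an SPDK checkout and the socket path bdevperf is typically started with in these tests (JSON field names vary across SPDK versions):

    # poll controller state during recovery (socket path is an assumption)
    rpc=/var/tmp/bdevperf.sock
    while ./scripts/rpc.py -s "$rpc" bdev_nvme_get_controllers; do
        sleep 1
    done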
00:22:06.490 [2024-10-17 19:29:30.215493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.215508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a29610 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.215517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a29610 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.215668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.215680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f363c0 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.215688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f363c0 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.215918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.215928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3a530 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.215936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f3a530 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.216128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.216140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b11690 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.216148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b11690 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.216310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.216321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b0a710 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.216328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b0a710 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.216592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.216607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f62770 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.216615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f62770 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.216809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.216819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b152b0 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.216826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b152b0 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.216996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.490 [2024-10-17 19:29:30.217007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b14e50 with addr=10.0.0.2, port=4420 00:22:06.490 [2024-10-17 19:29:30.217015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1b14e50 is same with the state(6) to be set 00:22:06.490 [2024-10-17 19:29:30.217024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a29610 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f363c0 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a530 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b11690 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0a710 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f62770 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b152b0 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b14e50 (9): Bad file descriptor 00:22:06.490 [2024-10-17 19:29:30.217111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.217119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.217125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:06.490 [2024-10-17 19:29:30.217134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.217140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.217147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:06.490 [2024-10-17 19:29:30.217155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.217162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.217169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:06.490 [2024-10-17 19:29:30.217177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.217184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.217191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
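Each three-line ladder above is one controller hitting the same dead end: nvme_ctrlr_process_init cannot reach the target, spdk_nvme_ctrlr_reconnect_poll_async gives up, and nvme_ctrlr_fail latches the failed state that the subsequent reset completions report. The kernel initiator fails the same way once the target is gone; a hedged sketch using standard nvme-cli options, run as root (not part of this test):

    # expected to fail with connection refused while the target is down
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 ||
        echo "connect refused, the kernel-side analogue of the errors above"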
00:22:06.490 [2024-10-17 19:29:30.217199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.217209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:06.490 [2024-10-17 19:29:30.217216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:06.490 [2024-10-17 19:29:30.217242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.217249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.217255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.217261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.217267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.490 [2024-10-17 19:29:30.217273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:06.490 [2024-10-17 19:29:30.217279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:06.491 [2024-10-17 19:29:30.217285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:06.491 [2024-10-17 19:29:30.217293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:06.491 [2024-10-17 19:29:30.217299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:06.491 [2024-10-17 19:29:30.217305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:06.491 [2024-10-17 19:29:30.217314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:06.491 [2024-10-17 19:29:30.217320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:06.491 [2024-10-17 19:29:30.217326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:06.491 [2024-10-17 19:29:30.217349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.491 [2024-10-17 19:29:30.217356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.491 [2024-10-17 19:29:30.217361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
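Everything from the first Bad file descriptor to this point is the intended shutdown_tc3 outcome: the target process is killed while bdevperf still has queue-depth-64 verify jobs open against ten subsystems, so every flush, reset, and reconnect can only fail until the host side is stopped. A condensed sketch of the shape of the scenario (illustrative only; the real sequencing lives in test/nvmf/target/shutdown.sh):

    # condensed shape of shutdown_tc3 (illustrative only)
    ./build/bin/nvmf_tgt &    # target; 10 subsystems/listeners set up via rpc.py (elided)
    tgtpid=$!
    # ... bdevperf started against cnode1..cnode10, verify workload, qd=64 ...
    sleep 5
    kill -9 "$tgtpid"         # yank the target mid-I/O
    # the host now logs errno = 111 / Bad file descriptor until it is torn down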
00:22:07.059 19:29:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2160713 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2160713 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2160713 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.059 rmmod nvme_tcp 00:22:08.059 
rmmod nvme_fabrics 00:22:08.059 rmmod nvme_keyring 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 2160450 ']' 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 2160450 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2160450 ']' 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2160450 00:22:08.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2160450) - No such process 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2160450 is not found' 00:22:08.059 Process with pid 2160450 is not found 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.059 19:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.026 00:22:10.026 real 0m7.389s 00:22:10.026 user 0m17.571s 00:22:10.026 sys 0m1.378s 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.026 ************************************ 00:22:10.026 END TEST nvmf_shutdown_tc3 00:22:10.026 ************************************ 00:22:10.026 19:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.026 ************************************ 00:22:10.026 START TEST nvmf_shutdown_tc4 00:22:10.026 ************************************ 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.026 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:10.027 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:10.027 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.027 19:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:10.027 Found net devices under 0000:86:00.0: cvl_0_0 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:10.027 Found net devices under 0000:86:00.1: cvl_0_1 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.027 19:29:33 
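The "Found net devices under ..." records above are common.sh resolving each selected PCI function to the netdev the kernel bound to it. A minimal bash sketch of that sysfs lookup, assuming the same two e810 functions and the cvl_* names the ice driver reports in this run:

    # map each test NIC's PCI address to its kernel network interface
    pci_devs=(0000:86:00.0 0000:86:00.1)   # the two e810 functions found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With the two interfaces collected, the trace then picks cvl_0_0 as NVMF_TARGET_INTERFACE and cvl_0_1 as NVMF_INITIATOR_INTERFACE.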
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.027 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.286 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.286 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.286 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.287 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.287 19:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:22:10.287 00:22:10.287 --- 10.0.0.2 ping statistics --- 00:22:10.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.287 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:22:10.287 00:22:10.287 --- 10.0.0.1 ping statistics --- 00:22:10.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.287 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:10.287 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=2161921 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 2161921 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 2161921 ']' 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
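Every step nvmf_tcp_init performed above can be replayed by hand; the sequence below restates the commands from the trace, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                           # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let NVMe/TCP traffic toward the initiator interface through the firewall
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator

The two pings correspond to the statistics printed above and confirm the namespaces can reach each other before the target is started.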
00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.546 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.546 [2024-10-17 19:29:34.148379] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:22:10.546 [2024-10-17 19:29:34.148426] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.546 [2024-10-17 19:29:34.224281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.546 [2024-10-17 19:29:34.266030] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.546 [2024-10-17 19:29:34.266068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.546 [2024-10-17 19:29:34.266074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.546 [2024-10-17 19:29:34.266081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.546 [2024-10-17 19:29:34.266086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.546 [2024-10-17 19:29:34.267557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.546 [2024-10-17 19:29:34.267669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.546 [2024-10-17 19:29:34.267761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.546 [2024-10-17 19:29:34.267762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.805 [2024-10-17 19:29:34.404160] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:10.805 19:29:34 
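rpc_cmd above is the harness wrapper around scripts/rpc.py talking to the target's default /var/tmp/spdk.sock, so the transport created at target/shutdown.sh@21 would look roughly like the sketch below outside the harness (flag meanings per rpc.py; this is a sketch, not the harness's exact invocation):

    # create the TCP transport on the running nvmf_tgt, same flags as the trace:
    # -u 8192 sets the I/O unit size, -o toggles the TCP-only C2H success optimization
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192

The "*** TCP Transport Init ***" notice above is the target acknowledging exactly this call.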
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.805 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:10.806 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.806 Malloc1 
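The cat loop at shutdown.sh@29 appends one block per subsystem (1 through 10) to rpcs.txt, and the rpc_cmd at shutdown.sh@36 replays the whole file in a single rpc.py session, which is what produces the Malloc1..Malloc10 output around this point. The excerpt never prints the file itself, so the block below is only illustrative of the usual pattern (bdev size, block size, NQN and serial are hypothetical):

    # hypothetical rpcs.txt block for subsystem $i
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The "Listening on 10.0.0.2 port 4420" notice just below is consistent with the add_listener calls landing.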
00:22:10.806 [2024-10-17 19:29:34.517316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.806 Malloc2 00:22:10.806 Malloc3 00:22:11.063 Malloc4 00:22:11.063 Malloc5 00:22:11.063 Malloc6 00:22:11.064 Malloc7 00:22:11.064 Malloc8 00:22:11.064 Malloc9 00:22:11.322 Malloc10 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2162039 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:11.322 19:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:11.322 [2024-10-17 19:29:35.023163] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2161921 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2161921 ']' 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2161921 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.599 19:29:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2161921 00:22:16.599 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.599 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.599 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2161921' 00:22:16.599 killing process with pid 2161921 00:22:16.599 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 2161921 00:22:16.599 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 2161921 00:22:16.599 [2024-10-17 19:29:40.022994] 
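What tc4 exercises here is shutdown under load: spdk_nvme_perf is started in the background against 10.0.0.2:4420 (hence the deprecation warning above about connecting to the discovery subsystem's listener), given a 5 second head start, and then the target process is killed with queue-depth-128 writes still in flight. Stripped of the harness's killprocess bookkeeping, the step amounts to the sketch below (perfpid and nvmfpid mirror the trace's variables):

    # start heavy random-write load over NVMe/TCP, then shoot the target
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5                 # let I/O reach steady state
    kill "$nvmfpid"         # target dies with I/O outstanding
    wait "$nvmfpid" || true

The wall of "Write completed with error (sct=0, sc=8)" and "CQ transport error -6 (No such device or address)" lines that follows is the initiator side of exactly that: every outstanding command on every qpair completes with an error as the connections drop.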
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c42b30 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43020 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43020 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43020 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43020 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43020 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.023678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43020 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.024344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43510 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.024376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43510 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.024384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43510 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.024391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43510 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.024398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43510 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.024405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43510 is same with the 
state(6) to be set 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 [2024-10-17 19:29:40.024969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db8030 is same with the state(6) to be set 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 [2024-10-17 19:29:40.024994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db8030 is same with the state(6) to be set 00:22:16.599 [2024-10-17 19:29:40.025003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db8030 is same with the state(6) to be set 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 [2024-10-17 19:29:40.025195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 
00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.599 starting I/O failed: -6 00:22:16.599 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 [2024-10-17 19:29:40.026109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed 
with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 [2024-10-17 19:29:40.026949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 [2024-10-17 19:29:40.026970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with starting I/O failed: -6 00:22:16.600 the state(6) to be set 00:22:16.600 [2024-10-17 19:29:40.026980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 [2024-10-17 19:29:40.026988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 [2024-10-17 19:29:40.026995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 [2024-10-17 19:29:40.027002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 starting I/O failed: -6 00:22:16.600 [2024-10-17 19:29:40.027009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 [2024-10-17 19:29:40.027015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c43d80 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 [2024-10-17 19:29:40.027106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 [2024-10-17 19:29:40.027263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44270 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 [2024-10-17 19:29:40.027288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44270 is same with the state(6) to be set 00:22:16.600 [2024-10-17 19:29:40.027298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44270 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 [2024-10-17 19:29:40.027305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44270 is same with the state(6) to be set 00:22:16.600 starting I/O failed: -6 00:22:16.600 [2024-10-17 19:29:40.027313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44270 is same with the state(6) to be set 00:22:16.600 [2024-10-17 19:29:40.027320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44270 is same with the state(6) to be set 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 
starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.600 starting I/O failed: -6 00:22:16.600 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 [2024-10-17 19:29:40.027605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44760 is same with the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 [2024-10-17 19:29:40.027624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44760 is same with starting I/O failed: -6 00:22:16.601 the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.027632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44760 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.027639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44760 is same with the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 [2024-10-17 19:29:40.027645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c44760 is same with starting I/O failed: -6 00:22:16.601 the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 [2024-10-17 19:29:40.027929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 [2024-10-17 19:29:40.027952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.027961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 Write completed with 
error (sct=0, sc=8) 00:22:16.601 [2024-10-17 19:29:40.027967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 starting I/O failed: -6 00:22:16.601 [2024-10-17 19:29:40.027974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.027980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 [2024-10-17 19:29:40.027988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 starting I/O failed: -6 00:22:16.601 [2024-10-17 19:29:40.027996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.028002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 [2024-10-17 19:29:40.028009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c438b0 is same with starting I/O failed: -6 00:22:16.601 the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 [2024-10-17 19:29:40.028677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.601 NVMe io qpair process completion error 00:22:16.601 [2024-10-17 19:29:40.029004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8840 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.029018] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8840 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.029024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8840 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.029032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8840 is same with the state(6) to be set 00:22:16.601 [2024-10-17 19:29:40.029039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8840 is same with the state(6) to be set 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 starting I/O failed: -6 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.601 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 [2024-10-17 19:29:40.029798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error 
(sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 [2024-10-17 19:29:40.030631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 
starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 [2024-10-17 19:29:40.031635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.602 Write completed with 
error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.602 Write completed with error (sct=0, sc=8) 00:22:16.602 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error 
(sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 [2024-10-17 19:29:40.033382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.603 NVMe io qpair process completion error 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, 
sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 [2024-10-17 19:29:40.034414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 00:22:16.603 Write completed with error (sct=0, sc=8) 00:22:16.603 starting I/O failed: -6 
00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 [2024-10-17 19:29:40.035291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write 
completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 [2024-10-17 19:29:40.036280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error 
(sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.604 Write completed with error (sct=0, sc=8) 00:22:16.604 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error 
(sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 [2024-10-17 19:29:40.037962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.605 NVMe io qpair process completion error 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 [2024-10-17 19:29:40.039003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.605 starting I/O failed: -6 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with 
error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 [2024-10-17 19:29:40.039883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.605 starting I/O failed: -6 00:22:16.605 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O 
failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 [2024-10-17 19:29:40.041093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed 
with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with 
error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.606 starting I/O failed: -6 00:22:16.606 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 [2024-10-17 19:29:40.045165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.607 NVMe io qpair process completion error 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 
starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 [2024-10-17 19:29:40.046200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 
00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 [2024-10-17 19:29:40.047082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 Write completed with error (sct=0, sc=8) 00:22:16.607 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 
00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 [2024-10-17 19:29:40.048173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 
00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 [2024-10-17 19:29:40.052334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:16.608 NVMe io qpair process completion error 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 
00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 starting I/O failed: -6 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.608 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 [2024-10-17 19:29:40.053335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 
00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 [2024-10-17 19:29:40.054230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 00:22:16.609 Write completed with error (sct=0, sc=8) 00:22:16.609 starting I/O failed: -6 
00:22:16.609 Write completed with error (sct=0, sc=8)
00:22:16.609 starting I/O failed: -6
00:22:16.609 [2024-10-17 19:29:40.055233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.610 [2024-10-17 19:29:40.057124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.610 NVMe io qpair process completion error
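The "CQ transport error -6 (No such device or address)" entries above are emitted by SPDK's completion poller once the TCP connection to the target drops; -6 is ENXIO. As a rough sketch (not the test's actual code) of how a host application observes this: spdk_nvme_qpair_process_completions() returns a negative errno instead of a completion count, and the caller chooses how to react. The ctrlr/qpair handles and the reset-on-failure policy here are assumptions for illustration.

```c
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Minimal sketch: poll a qpair and react to a transport-level failure.
 * ctrlr and qpair are assumed to have been connected elsewhere. */
static void
poll_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* Returns the number of completions processed, or a negative
	 * errno if the qpair has failed; 0 as the second argument means
	 * "no limit on completions per call". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Matches the log: CQ transport error -6 (No such device
		 * or address). The connection is gone; outstanding I/O
		 * will complete with an abort status. One possible
		 * reaction is a full controller reset: */
		fprintf(stderr, "qpair failed, resetting controller\n");
		spdk_nvme_ctrlr_reset(ctrlr);
	}
}
```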
00:22:16.610 Write completed with error (sct=0, sc=8)
00:22:16.610 starting I/O failed: -6
00:22:16.610 [2024-10-17 19:29:40.058146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.611 [2024-10-17 19:29:40.059055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.611 [2024-10-17 19:29:40.060117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.612 [2024-10-17 19:29:40.061918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.612 NVMe io qpair process completion error
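Each "Write completed with error (sct=0, sc=8)" line reports the NVMe completion status: status code type 0 is the generic set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", which is the expected status once the qpairs above are torn down. A hedged sketch of the kind of completion callback that would print such a line follows; the function name is illustrative, not the test's actual code.

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative completion callback producing lines like
 * "Write completed with error (sct=0, sc=8)": sct=0 is the generic
 * status code type, sc=0x08 is "Command Aborted due to SQ Deletion",
 * i.e. the write was aborted because its qpair was being deleted. */
static void
write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}
```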
00:22:16.612 Write completed with error (sct=0, sc=8)
00:22:16.612 starting I/O failed: -6
00:22:16.612 [2024-10-17 19:29:40.062964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.612 [2024-10-17 19:29:40.063842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.613 [2024-10-17 19:29:40.064862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.613 [2024-10-17 19:29:40.068649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.613 NVMe io qpair process completion error
00:22:16.614 Write completed with error (sct=0, sc=8)
00:22:16.614 starting I/O failed: -6
00:22:16.614 [2024-10-17 19:29:40.070276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.614 [2024-10-17 19:29:40.071449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.615 [2024-10-17 19:29:40.074193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.615 NVMe io qpair process completion error
00:22:16.615 Write completed with error (sct=0, sc=8)
00:22:16.615 starting I/O failed: -6
00:22:16.615 [2024-10-17 19:29:40.075202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:16.616 [2024-10-17 19:29:40.076069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:22:16.616 [2024-10-17 19:29:40.077121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:22:16.616 [2024-10-17 19:29:40.079636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:22:16.616 NVMe io qpair process completion error
00:22:16.616 Initializing NVMe Controllers
00:22:16.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:22:16.616 Controller IO queue size 128, less than required.
00:22:16.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:22:16.616 Controller IO queue size 128, less than required.
00:22:16.616 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:22:16.617 Controller IO queue size 128, less than required.
00:22:16.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:22:16.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:22:16.617 Initialization complete. Launching workers.
00:22:16.617 ========================================================
00:22:16.617 Latency(us)
00:22:16.617 Device Information : IOPS MiB/s Average min max
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2136.33 91.80 59918.73 1158.43 120879.14
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2218.34 95.32 57728.60 700.74 128701.31
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2196.22 94.37 57658.59 958.44 106766.71
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2201.23 94.58 57538.45 778.31 104874.30
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2189.54 94.08 57857.10 727.60 103134.90
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2214.38 95.15 57224.83 694.42 100089.68
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2152.61 92.49 58906.40 931.53 106390.88
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2159.29 92.78 58766.01 719.04 111341.47
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2214.58 95.16 57313.46 653.70 97526.95
00:22:16.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2219.38 95.36 57202.29 910.51 96558.60
00:22:16.617 ========================================================
00:22:16.617 Total : 21901.91 941.10 58000.74 653.70 128701.31
00:22:16.617
00:22:16.617 [2024-10-17 19:29:40.082584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272960 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272fc0 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12749d0 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272630 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12747f0 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1274bb0 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1272c90 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273810 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1273b40 is same with the state(6) to be set
00:22:16.617 [2024-10-17 19:29:40.082871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12734e0 is same with the state(6) to be set
00:22:16.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:22:16.877 19:29:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
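[A note on reading the perf output above. The "Controller IO queue size 128, less than required" warnings mean the tool's queue depth exceeded the 128 entries each target IO queue advertises, so surplus requests wait in the host driver; lowering the queue depth or IO size avoids that. The table is also internally consistent: each row satisfies MiB/s = IOPS x IO size / 2^20, e.g. for cnode4, 2136.33 IOPS x 45056 B / 1048576 is about 91.8 MiB/s, which suggests this run used roughly 44 KiB writes. The exact spdk_nvme_perf command line is not echoed in this log; the sketch below uses real spdk_nvme_perf options but assumed values, and is illustrative only:

  # -q queue depth, -o IO size in bytes, -w workload, -t seconds, -r transport ID
  # all values here are assumptions, not the ones this run actually used
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 45056 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode4']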
00:22:17.814 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2162039
00:22:17.814 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2162039
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2162039
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:17.815 rmmod nvme_tcp
00:22:17.815 rmmod nvme_fabrics
00:22:17.815 rmmod nvme_keyring
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
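[The teardown traced around this point leans on small helpers from autotest_common.sh and nvmf/common.sh: NOT runs a command and succeeds only if the command fails (the es=1 above is the expected failure of wait on the killed perf process), killprocess probes a pid with kill -0 before killing it, and iptr strips only the iptables rules the harness tagged with an SPDK_NVMF comment. Minimal sketches reconstructed from this trace, not copied from the SPDK scripts:

  NOT() {                 # invert a command's status: fail if it succeeds
      local es=0
      "$@" || es=$?
      (( es != 0 ))
  }

  killprocess() {         # kill a pid only if it is still alive
      local pid=$1
      if kill -0 "$pid" 2>/dev/null; then
          kill "$pid" && wait "$pid"
      else
          echo "Process with pid $pid is not found"
      fi
  }

  iptr() {                # drop only the SPDK_NVMF-tagged firewall rules
      iptables-save | grep -v SPDK_NVMF | iptables-restore
  }]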
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 2161921 ']'
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 2161921
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 2161921 ']'
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 2161921
00:22:17.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2161921) - No such process
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 2161921 is not found'
00:22:17.815 Process with pid 2161921 is not found
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:17.815 19:29:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:20.352
00:22:20.352 real 0m9.792s
00:22:20.352 user 0m24.741s
00:22:20.352 sys 0m5.338s
00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:22:20.352 ************************************
00:22:20.352 END TEST nvmf_shutdown_tc4
00:22:20.352 ************************************
00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:22:20.352
00:22:20.352 real 0m40.806s
00:22:20.352 user 1m40.254s
00:22:20.352 sys 0m14.118s
00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown --
common/autotest_common.sh@10 -- # set +x 00:22:20.352 ************************************ 00:22:20.352 END TEST nvmf_shutdown 00:22:20.352 ************************************ 00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:22:20.352 00:22:20.352 real 11m45.266s 00:22:20.352 user 25m33.512s 00:22:20.352 sys 3m38.657s 00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:20.352 19:29:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:20.352 ************************************ 00:22:20.352 END TEST nvmf_target_extra 00:22:20.352 ************************************ 00:22:20.352 19:29:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:20.352 19:29:43 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.352 19:29:43 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.352 19:29:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.352 ************************************ 00:22:20.352 START TEST nvmf_host 00:22:20.352 ************************************ 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:20.352 * Looking for test storage... 00:22:20.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:20.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.352 --rc genhtml_branch_coverage=1 00:22:20.352 --rc genhtml_function_coverage=1 00:22:20.352 --rc genhtml_legend=1 00:22:20.352 --rc geninfo_all_blocks=1 00:22:20.352 --rc geninfo_unexecuted_blocks=1 00:22:20.352 00:22:20.352 ' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:20.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.352 --rc genhtml_branch_coverage=1 00:22:20.352 --rc genhtml_function_coverage=1 00:22:20.352 --rc genhtml_legend=1 00:22:20.352 --rc geninfo_all_blocks=1 00:22:20.352 --rc geninfo_unexecuted_blocks=1 00:22:20.352 00:22:20.352 ' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:20.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.352 --rc genhtml_branch_coverage=1 00:22:20.352 --rc genhtml_function_coverage=1 00:22:20.352 --rc genhtml_legend=1 00:22:20.352 --rc geninfo_all_blocks=1 00:22:20.352 --rc geninfo_unexecuted_blocks=1 00:22:20.352 00:22:20.352 ' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:20.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.352 --rc genhtml_branch_coverage=1 00:22:20.352 --rc genhtml_function_coverage=1 00:22:20.352 --rc genhtml_legend=1 00:22:20.352 --rc geninfo_all_blocks=1 00:22:20.352 --rc geninfo_unexecuted_blocks=1 00:22:20.352 00:22:20.352 ' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.352 19:29:43 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.353 ************************************ 00:22:20.353 START TEST nvmf_multicontroller 00:22:20.353 ************************************ 00:22:20.353 19:29:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:20.353 * Looking for test storage... 
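[A note on the "[: : integer expression expected" complaint from nvmf/common.sh line 33 above: the guard '[' '' -eq 1 ']' compares an unset, empty variable numerically, so test prints the error and the check simply falls through as false; the run itself is unaffected. A defensive form would default the value before comparing, e.g. (flag name hypothetical, shown only to illustrate the guard):

  [ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo 'flag enabled']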
00:22:20.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:20.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.353 --rc genhtml_branch_coverage=1 00:22:20.353 --rc genhtml_function_coverage=1 00:22:20.353 --rc genhtml_legend=1 00:22:20.353 --rc geninfo_all_blocks=1 00:22:20.353 --rc geninfo_unexecuted_blocks=1 00:22:20.353 00:22:20.353 ' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:20.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.353 --rc genhtml_branch_coverage=1 00:22:20.353 --rc genhtml_function_coverage=1 00:22:20.353 --rc genhtml_legend=1 00:22:20.353 --rc geninfo_all_blocks=1 00:22:20.353 --rc geninfo_unexecuted_blocks=1 00:22:20.353 00:22:20.353 ' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:20.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.353 --rc genhtml_branch_coverage=1 00:22:20.353 --rc genhtml_function_coverage=1 00:22:20.353 --rc genhtml_legend=1 00:22:20.353 --rc geninfo_all_blocks=1 00:22:20.353 --rc geninfo_unexecuted_blocks=1 00:22:20.353 00:22:20.353 ' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:20.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.353 --rc genhtml_branch_coverage=1 00:22:20.353 --rc genhtml_function_coverage=1 00:22:20.353 --rc genhtml_legend=1 00:22:20.353 --rc geninfo_all_blocks=1 00:22:20.353 --rc geninfo_unexecuted_blocks=1 00:22:20.353 00:22:20.353 ' 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:20.353 19:29:44 
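[The lt 1.15 2 / cmp_versions trace that repeats above (once per test file) splits each version string on '.' and '-' into ver1/ver2 arrays and compares field by field. A self-contained sketch of the same idea, simplified from what scripts/common.sh traces here (the real helper also handles other comparison operators):

  lt() {
      local IFS=.-
      local -a ver1 ver2
      local v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1    # equal versions are not less-than
  }

  lt 1.15 2 && echo 'lcov predates 2.x, enable the legacy lcov options']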
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.353 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.612 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:20.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.613 19:29:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:20.613 19:29:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:27.183 
19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.183 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:27.184 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:27.184 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:27.184 19:29:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:27.184 Found net devices under 0000:86:00.0: cvl_0_0 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:27.184 Found net devices under 0000:86:00.1: cvl_0_1 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 
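[The device scan above works from a small vendor:device table: Intel (0x8086) E810 NICs are device IDs 0x1592 and 0x159b, alongside the 0x37d2 x722 and a list of Mellanox (0x15b3) parts. This host matched two E810 functions, 0000:86:00.0 and 0000:86:00.1, and resolved them through sysfs to the kernel netdevs cvl_0_0 and cvl_0_1. The same lookup can be reproduced by hand with stock tools, outside the harness:

  lspci -d 8086:159b                           # list E810 functions by vendor:device ID
  ls /sys/bus/pci/devices/0000:86:00.0/net/    # netdev name(s) behind one function]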
00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:27.184 19:29:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:27.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.408 ms 00:22:27.184 00:22:27.184 --- 10.0.0.2 ping statistics --- 00:22:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.184 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:27.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:22:27.184 00:22:27.184 --- 10.0.0.1 ping statistics --- 00:22:27.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.184 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=2166748 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 2166748 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2166748 ']' 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.184 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.184 [2024-10-17 19:29:50.154398] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
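nvmf_tcp_init, traced above, splits the two E810 ports into an initiator side and a target side by moving the target port into a fresh network namespace, so NVMe/TCP traffic genuinely crosses the link instead of looping back in the kernel. The commands below are lifted from the trace into a standalone sketch (interface names and IPs as logged):

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port and tag the rule so teardown can strip it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Both directions must ping before nvmf_tgt is started inside the namespace.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1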
00:22:27.184 [2024-10-17 19:29:50.154443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.184 [2024-10-17 19:29:50.231516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:27.184 [2024-10-17 19:29:50.273520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.185 [2024-10-17 19:29:50.273553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.185 [2024-10-17 19:29:50.273560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.185 [2024-10-17 19:29:50.273566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.185 [2024-10-17 19:29:50.273571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.185 [2024-10-17 19:29:50.275027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.185 [2024-10-17 19:29:50.275131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.185 [2024-10-17 19:29:50.275132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 [2024-10-17 19:29:50.411113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 Malloc0 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 [2024-10-17 19:29:50.472739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 [2024-10-17 19:29:50.480671] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 Malloc1 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2166794 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2166794 /var/tmp/bdevperf.sock 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2166794 ']' 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
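At this point the target has been provisioned over JSON-RPC: a TCP transport, two subsystems each backed by a 64 MB / 512 B-block malloc bdev, and listeners on ports 4420 and 4421, with bdevperf launched as a second SPDK app (-z waits for RPC) on its own socket. A condensed sketch of the same sequence via scripts/rpc.py (the rpc.py invocation is an assumption; the test drives these calls through its rpc_cmd wrapper):

RPC=./scripts/rpc.py   # assumed entry point; arguments below are as traced
$RPC nvmf_create_transport -t tcp -o -u 8192
for i in 1 2; do
    $RPC bdev_malloc_create 64 512 -b Malloc$((i-1))
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i-1))
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
done
# bdevperf then runs with its own RPC socket, as traced:
#   build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f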
00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.185 19:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.445 NVMe0n1 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.445 1 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.445 request: 00:22:27.445 { 00:22:27.445 "name": "NVMe0", 00:22:27.445 "trtype": "tcp", 00:22:27.445 "traddr": "10.0.0.2", 00:22:27.445 "adrfam": "ipv4", 00:22:27.445 "trsvcid": "4420", 00:22:27.445 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:27.445 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:27.445 "hostaddr": "10.0.0.1", 00:22:27.445 "prchk_reftag": false, 00:22:27.445 "prchk_guard": false, 00:22:27.445 "hdgst": false, 00:22:27.445 "ddgst": false, 00:22:27.445 "allow_unrecognized_csi": false, 00:22:27.445 "method": "bdev_nvme_attach_controller", 00:22:27.445 "req_id": 1 00:22:27.445 } 00:22:27.445 Got JSON-RPC error response 00:22:27.445 response: 00:22:27.445 { 00:22:27.445 "code": -114, 00:22:27.445 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:27.445 } 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.445 request: 00:22:27.445 { 00:22:27.445 "name": "NVMe0", 00:22:27.445 "trtype": "tcp", 00:22:27.445 "traddr": "10.0.0.2", 00:22:27.445 "adrfam": "ipv4", 00:22:27.445 "trsvcid": "4420", 00:22:27.445 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:27.445 "hostaddr": "10.0.0.1", 00:22:27.445 "prchk_reftag": false, 00:22:27.445 "prchk_guard": false, 00:22:27.445 "hdgst": false, 00:22:27.445 "ddgst": false, 00:22:27.445 "allow_unrecognized_csi": false, 00:22:27.445 "method": "bdev_nvme_attach_controller", 00:22:27.445 "req_id": 1 00:22:27.445 } 00:22:27.445 Got JSON-RPC error response 00:22:27.445 response: 00:22:27.445 { 00:22:27.445 "code": -114, 00:22:27.445 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:27.445 } 00:22:27.445 19:29:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:27.445 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.446 request: 00:22:27.446 { 00:22:27.446 "name": "NVMe0", 00:22:27.446 "trtype": "tcp", 00:22:27.446 "traddr": "10.0.0.2", 00:22:27.446 "adrfam": "ipv4", 00:22:27.446 "trsvcid": "4420", 00:22:27.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.446 "hostaddr": "10.0.0.1", 00:22:27.446 "prchk_reftag": false, 00:22:27.446 "prchk_guard": false, 00:22:27.446 "hdgst": false, 00:22:27.446 "ddgst": false, 00:22:27.446 "multipath": "disable", 00:22:27.446 "allow_unrecognized_csi": false, 00:22:27.446 "method": "bdev_nvme_attach_controller", 00:22:27.446 "req_id": 1 00:22:27.446 } 00:22:27.446 Got JSON-RPC error response 00:22:27.446 response: 00:22:27.446 { 00:22:27.446 "code": -114, 00:22:27.446 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:27.446 } 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.446 19:29:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.446 request: 00:22:27.446 { 00:22:27.446 "name": "NVMe0", 00:22:27.446 "trtype": "tcp", 00:22:27.446 "traddr": "10.0.0.2", 00:22:27.446 "adrfam": "ipv4", 00:22:27.446 "trsvcid": "4420", 00:22:27.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.446 "hostaddr": "10.0.0.1", 00:22:27.446 "prchk_reftag": false, 00:22:27.446 "prchk_guard": false, 00:22:27.446 "hdgst": false, 00:22:27.446 "ddgst": false, 00:22:27.446 "multipath": "failover", 00:22:27.446 "allow_unrecognized_csi": false, 00:22:27.446 "method": "bdev_nvme_attach_controller", 00:22:27.446 "req_id": 1 00:22:27.446 } 00:22:27.446 Got JSON-RPC error response 00:22:27.446 response: 00:22:27.446 { 00:22:27.446 "code": -114, 00:22:27.446 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:27.446 } 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.446 NVMe0n1 00:22:27.446 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
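All four NOT-wrapped attaches above fail with JSON-RPC error -114: once the controller name NVMe0 exists, a re-attach with a different host NQN, a different subsystem, multipath disabled, or multipath failover over the same path is rejected. The attach that follows succeeds because it points at a second listener (port 4421) of the same subsystem, which is a new path for the existing controller. Reduced to the two contrasting calls (socket and arguments as traced; the rpc.py entry point is an assumption):

RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"
# Rejected, code -114: name NVMe0 is taken and cnode2 is a different subsystem.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || true
# Accepted: same subsystem through its second listener, added as an extra path.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1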
00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.705 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.705 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:27.964 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.964 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:27.964 19:29:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:28.900 { 00:22:28.900 "results": [ 00:22:28.901 { 00:22:28.901 "job": "NVMe0n1", 00:22:28.901 "core_mask": "0x1", 00:22:28.901 "workload": "write", 00:22:28.901 "status": "finished", 00:22:28.901 "queue_depth": 128, 00:22:28.901 "io_size": 4096, 00:22:28.901 "runtime": 1.007847, 00:22:28.901 "iops": 24711.09206060047, 00:22:28.901 "mibps": 96.52770336172058, 00:22:28.901 "io_failed": 0, 00:22:28.901 "io_timeout": 0, 00:22:28.901 "avg_latency_us": 5173.7815321842045, 00:22:28.901 "min_latency_us": 3105.158095238095, 00:22:28.901 "max_latency_us": 9986.438095238096 00:22:28.901 } 00:22:28.901 ], 00:22:28.901 "core_count": 1 00:22:28.901 } 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2166794 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # '[' -z 2166794 ']' 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2166794 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.901 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2166794 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2166794' 00:22:29.160 killing process with pid 2166794 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2166794 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2166794 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:22:29.160 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:29.160 [2024-10-17 19:29:50.586420] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:22:29.160 [2024-10-17 19:29:50.586465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166794 ] 00:22:29.160 [2024-10-17 19:29:50.661570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.160 [2024-10-17 19:29:50.703887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.160 [2024-10-17 19:29:51.476433] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name c95d060a-1cf1-43d8-9c58-71cecac8621b already exists 00:22:29.160 [2024-10-17 19:29:51.476461] bdev.c:7836:bdev_register: *ERROR*: Unable to add uuid:c95d060a-1cf1-43d8-9c58-71cecac8621b alias for bdev NVMe1n1 00:22:29.160 [2024-10-17 19:29:51.476469] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:29.160 Running I/O for 1 seconds... 00:22:29.160 24650.00 IOPS, 96.29 MiB/s 00:22:29.160 Latency(us) 00:22:29.160 [2024-10-17T17:29:52.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.160 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:29.160 NVMe0n1 : 1.01 24711.09 96.53 0.00 0.00 5173.78 3105.16 9986.44 00:22:29.160 [2024-10-17T17:29:52.944Z] =================================================================================================================== 00:22:29.160 [2024-10-17T17:29:52.944Z] Total : 24711.09 96.53 0.00 0.00 5173.78 3105.16 9986.44 00:22:29.160 Received shutdown signal, test time was about 1.000000 seconds 00:22:29.160 00:22:29.160 Latency(us) 00:22:29.160 [2024-10-17T17:29:52.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.160 [2024-10-17T17:29:52.944Z] =================================================================================================================== 00:22:29.160 [2024-10-17T17:29:52.944Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:29.160 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.160 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.160 rmmod nvme_tcp 00:22:29.160 rmmod nvme_fabrics 00:22:29.160 rmmod nvme_keyring 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:29.419 
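The bdevperf figures in try.txt above hang together arithmetically, which is a quick way to spot a bogus run: throughput is IOPS times I/O size, and with a queue kept full the average latency follows from Little's law (latency ≈ queue depth / IOPS). A self-contained check using only the reported numbers:

# Cross-check of the try.txt result (pure arithmetic, no new measurements).
awk 'BEGIN {
    iops = 24711.09; io_size = 4096; qd = 128
    printf "throughput : %.2f MiB/s (reported 96.53)\n", iops * io_size / 1048576
    printf "avg latency: %.0f us    (reported 5173.78)\n", qd / iops * 1e6
}'
# The latency estimate lands at ~5180 us; the small gap versus 5173.78 us is
# expected since the measured runtime was 1.0078 s, not exactly 1 s.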
19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 2166748 ']' 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 2166748 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2166748 ']' 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2166748 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:29.419 19:29:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2166748 00:22:29.419 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:29.419 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:29.419 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2166748' 00:22:29.419 killing process with pid 2166748 00:22:29.419 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2166748 00:22:29.419 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2166748 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.678 19:29:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:31.582 00:22:31.582 real 0m11.331s 00:22:31.582 user 0m12.864s 00:22:31.582 sys 0m5.238s 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:31.582 ************************************ 00:22:31.582 END TEST nvmf_multicontroller 00:22:31.582 ************************************ 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:31.582 ************************************ 00:22:31.582 START TEST nvmf_aer 00:22:31.582 ************************************ 00:22:31.582 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:31.842 * Looking for test storage... 00:22:31.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:31.842 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.842 --rc genhtml_branch_coverage=1 00:22:31.842 --rc genhtml_function_coverage=1 00:22:31.842 --rc genhtml_legend=1 00:22:31.842 --rc geninfo_all_blocks=1 00:22:31.843 --rc geninfo_unexecuted_blocks=1 00:22:31.843 00:22:31.843 ' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:31.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.843 --rc genhtml_branch_coverage=1 00:22:31.843 --rc genhtml_function_coverage=1 00:22:31.843 --rc genhtml_legend=1 00:22:31.843 --rc geninfo_all_blocks=1 00:22:31.843 --rc geninfo_unexecuted_blocks=1 00:22:31.843 00:22:31.843 ' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:31.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.843 --rc genhtml_branch_coverage=1 00:22:31.843 --rc genhtml_function_coverage=1 00:22:31.843 --rc genhtml_legend=1 00:22:31.843 --rc geninfo_all_blocks=1 00:22:31.843 --rc geninfo_unexecuted_blocks=1 00:22:31.843 00:22:31.843 ' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:31.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:31.843 --rc genhtml_branch_coverage=1 00:22:31.843 --rc genhtml_function_coverage=1 00:22:31.843 --rc genhtml_legend=1 00:22:31.843 --rc geninfo_all_blocks=1 00:22:31.843 --rc geninfo_unexecuted_blocks=1 00:22:31.843 00:22:31.843 ' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:31.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:31.843 19:29:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:38.413 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:38.413 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:38.413 Found net devices under 0000:86:00.0: cvl_0_0 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:38.413 19:30:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:38.413 Found net devices under 0000:86:00.1: cvl_0_1 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:38.413 
19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:38.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:22:38.413 00:22:38.413 --- 10.0.0.2 ping statistics --- 00:22:38.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.413 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:38.413 00:22:38.413 --- 10.0.0.1 ping statistics --- 00:22:38.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.413 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:22:38.413 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=2170783 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 2170783 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2170783 ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 [2024-10-17 19:30:01.535212] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
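[Annotation] The nvmf_tcp_init trace above (nvmf/common.sh@250-291) moves one E810 port into a private network namespace and leaves its peer in the root namespace, so a single host can run target and initiator over a real link and verify both directions with ping. A minimal sketch of that topology, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing seen in this run (run as root):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
    ping -c 1 10.0.0.2                                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> initiator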
00:22:38.414 [2024-10-17 19:30:01.535257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.414 [2024-10-17 19:30:01.615514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.414 [2024-10-17 19:30:01.657966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.414 [2024-10-17 19:30:01.658005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.414 [2024-10-17 19:30:01.658012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.414 [2024-10-17 19:30:01.658018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.414 [2024-10-17 19:30:01.658024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.414 [2024-10-17 19:30:01.659450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.414 [2024-10-17 19:30:01.659559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.414 [2024-10-17 19:30:01.659588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.414 [2024-10-17 19:30:01.659590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 [2024-10-17 19:30:01.796119] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 Malloc0 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 [2024-10-17 19:30:01.858153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.414 [ 00:22:38.414 { 00:22:38.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.414 "subtype": "Discovery", 00:22:38.414 "listen_addresses": [], 00:22:38.414 "allow_any_host": true, 00:22:38.414 "hosts": [] 00:22:38.414 }, 00:22:38.414 { 00:22:38.414 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.414 "subtype": "NVMe", 00:22:38.414 "listen_addresses": [ 00:22:38.414 { 00:22:38.414 "trtype": "TCP", 00:22:38.414 "adrfam": "IPv4", 00:22:38.414 "traddr": "10.0.0.2", 00:22:38.414 "trsvcid": "4420" 00:22:38.414 } 00:22:38.414 ], 00:22:38.414 "allow_any_host": true, 00:22:38.414 "hosts": [], 00:22:38.414 "serial_number": "SPDK00000000000001", 00:22:38.414 "model_number": "SPDK bdev Controller", 00:22:38.414 "max_namespaces": 2, 00:22:38.414 "min_cntlid": 1, 00:22:38.414 "max_cntlid": 65519, 00:22:38.414 "namespaces": [ 00:22:38.414 { 00:22:38.414 "nsid": 1, 00:22:38.414 "bdev_name": "Malloc0", 00:22:38.414 "name": "Malloc0", 00:22:38.414 "nguid": "9DFBED7DC55F4CF0B097757F37CF10F7", 00:22:38.414 "uuid": "9dfbed7d-c55f-4cf0-b097-757f37cf10f7" 00:22:38.414 } 00:22:38.414 ] 00:22:38.414 } 00:22:38.414 ] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2170921 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:38.414 19:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:38.414 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.414 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:22:38.414 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:22:38.414 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:38.414 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.674 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:38.674 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:38.674 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:38.674 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 Malloc1 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 [ 00:22:38.675 { 00:22:38.675 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:38.675 "subtype": "Discovery", 00:22:38.675 "listen_addresses": [], 00:22:38.675 "allow_any_host": true, 00:22:38.675 "hosts": [] 00:22:38.675 }, 00:22:38.675 { 00:22:38.675 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.675 "subtype": "NVMe", 00:22:38.675 "listen_addresses": [ 00:22:38.675 { 00:22:38.675 "trtype": "TCP", 00:22:38.675 "adrfam": "IPv4", 00:22:38.675 "traddr": "10.0.0.2", 00:22:38.675 "trsvcid": "4420" 00:22:38.675 } 00:22:38.675 ], 00:22:38.675 "allow_any_host": true, 00:22:38.675 "hosts": [], 00:22:38.675 "serial_number": "SPDK00000000000001", 00:22:38.675 "model_number": "SPDK bdev Controller", 00:22:38.675 "max_namespaces": 2, 00:22:38.675 "min_cntlid": 1, 00:22:38.675 "max_cntlid": 65519, 00:22:38.675 "namespaces": [ 00:22:38.675 
{ 00:22:38.675 "nsid": 1, 00:22:38.675 "bdev_name": "Malloc0", 00:22:38.675 "name": "Malloc0", 00:22:38.675 "nguid": "9DFBED7DC55F4CF0B097757F37CF10F7", 00:22:38.675 "uuid": "9dfbed7d-c55f-4cf0-b097-757f37cf10f7" 00:22:38.675 }, 00:22:38.675 { 00:22:38.675 "nsid": 2, 00:22:38.675 "bdev_name": "Malloc1", 00:22:38.675 "name": "Malloc1", 00:22:38.675 "nguid": "F2E2A112A6B3485189D13642D7155B70", 00:22:38.675 "uuid": "f2e2a112-a6b3-4851-89d1-3642d7155b70" 00:22:38.675 } 00:22:38.675 ] 00:22:38.675 } 00:22:38.675 ] 00:22:38.675 Asynchronous Event Request test 00:22:38.675 Attaching to 10.0.0.2 00:22:38.675 Attached to 10.0.0.2 00:22:38.675 Registering asynchronous event callbacks... 00:22:38.675 Starting namespace attribute notice tests for all controllers... 00:22:38.675 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:38.675 aer_cb - Changed Namespace 00:22:38.675 Cleaning up... 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2170921 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:38.675 rmmod nvme_tcp 00:22:38.675 rmmod nvme_fabrics 00:22:38.675 rmmod nvme_keyring 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 2170783 ']' 
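[Annotation] Condensed, the aer.sh flow just traced reduces to the RPC sequence below. rpc_cmd in the trace in effect forwards these verbs to scripts/rpc.py over /var/tmp/spdk.sock; the relative paths and the 10.0.0.2:4420 listener are the ones from this run, and backgrounding the aer tool with & / wait is a simplification of the script's touch-file handshake:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the aer tool connects, registers its AEN callback, and waits for a namespace-attribute notice
    test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # fires the Changed Namespace AEN seen above
    wait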
00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 2170783 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2170783 ']' 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2170783 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2170783 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2170783' 00:22:38.675 killing process with pid 2170783 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2170783 00:22:38.675 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2170783 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.934 19:30:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:41.471 00:22:41.471 real 0m9.332s 00:22:41.471 user 0m5.464s 00:22:41.471 sys 0m4.884s 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:41.471 ************************************ 00:22:41.471 END TEST nvmf_aer 00:22:41.471 ************************************ 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:41.471 ************************************ 00:22:41.471 START TEST nvmf_async_init 00:22:41.471 
************************************ 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:41.471 * Looking for test storage... 00:22:41.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:41.471 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:41.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.472 --rc genhtml_branch_coverage=1 00:22:41.472 --rc genhtml_function_coverage=1 00:22:41.472 --rc genhtml_legend=1 00:22:41.472 --rc geninfo_all_blocks=1 00:22:41.472 --rc geninfo_unexecuted_blocks=1 00:22:41.472 00:22:41.472 ' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:41.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.472 --rc genhtml_branch_coverage=1 00:22:41.472 --rc genhtml_function_coverage=1 00:22:41.472 --rc genhtml_legend=1 00:22:41.472 --rc geninfo_all_blocks=1 00:22:41.472 --rc geninfo_unexecuted_blocks=1 00:22:41.472 00:22:41.472 ' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:41.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.472 --rc genhtml_branch_coverage=1 00:22:41.472 --rc genhtml_function_coverage=1 00:22:41.472 --rc genhtml_legend=1 00:22:41.472 --rc geninfo_all_blocks=1 00:22:41.472 --rc geninfo_unexecuted_blocks=1 00:22:41.472 00:22:41.472 ' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:41.472 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.472 --rc genhtml_branch_coverage=1 00:22:41.472 --rc genhtml_function_coverage=1 00:22:41.472 --rc genhtml_legend=1 00:22:41.472 --rc geninfo_all_blocks=1 00:22:41.472 --rc geninfo_unexecuted_blocks=1 00:22:41.472 00:22:41.472 ' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:41.472 19:30:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:41.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:41.472 19:30:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=feedf263eeda46d6a6e14bf14c4c4e8b 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:41.472 19:30:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.044 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.044 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.044 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.044 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.044 19:30:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.044 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:48.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:22:48.045 00:22:48.045 --- 10.0.0.2 ping statistics --- 00:22:48.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.045 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:48.045 00:22:48.045 --- 10.0.0.1 ping statistics --- 00:22:48.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.045 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=2174860 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 2174860 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2174860 ']' 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:48.045 19:30:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 [2024-10-17 19:30:10.962926] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
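[Annotation] Both test runs above log "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected". The traced command is '[' '' -eq 1 ']': an unset or empty variable reaching an arithmetic comparison. The [ builtin reports the error and returns status 2, which the surrounding test treats as false, so the runs proceed, but the noise recurs every time common.sh is sourced. A sketch of the usual defensive fix (the flag name here is illustrative, not the variable actually used at common.sh line 33):

    # default the expansion so [ ... -eq 1 ] always sees an integer
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi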
00:22:48.045 [2024-10-17 19:30:10.962970] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.045 [2024-10-17 19:30:11.042284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.045 [2024-10-17 19:30:11.082853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.045 [2024-10-17 19:30:11.082888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.045 [2024-10-17 19:30:11.082895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.045 [2024-10-17 19:30:11.082901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.045 [2024-10-17 19:30:11.082906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.045 [2024-10-17 19:30:11.083458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 [2024-10-17 19:30:11.217020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 null0 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g feedf263eeda46d6a6e14bf14c4c4e8b 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 [2024-10-17 19:30:11.265275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 nvme0n1 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.045 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.045 [ 00:22:48.045 { 00:22:48.045 "name": "nvme0n1", 00:22:48.045 "aliases": [ 00:22:48.045 "feedf263-eeda-46d6-a6e1-4bf14c4c4e8b" 00:22:48.045 ], 00:22:48.045 "product_name": "NVMe disk", 00:22:48.045 "block_size": 512, 00:22:48.045 "num_blocks": 2097152, 00:22:48.045 "uuid": "feedf263-eeda-46d6-a6e1-4bf14c4c4e8b", 00:22:48.045 "numa_id": 1, 00:22:48.045 "assigned_rate_limits": { 00:22:48.045 "rw_ios_per_sec": 0, 00:22:48.045 "rw_mbytes_per_sec": 0, 00:22:48.045 "r_mbytes_per_sec": 0, 00:22:48.045 "w_mbytes_per_sec": 0 00:22:48.045 }, 00:22:48.045 "claimed": false, 00:22:48.045 "zoned": false, 00:22:48.045 "supported_io_types": { 00:22:48.045 "read": true, 00:22:48.045 "write": true, 00:22:48.045 "unmap": false, 00:22:48.045 "flush": true, 00:22:48.045 "reset": true, 00:22:48.045 "nvme_admin": true, 00:22:48.045 "nvme_io": true, 00:22:48.045 "nvme_io_md": false, 00:22:48.045 "write_zeroes": true, 00:22:48.045 "zcopy": false, 00:22:48.045 "get_zone_info": false, 00:22:48.045 "zone_management": false, 00:22:48.045 "zone_append": false, 00:22:48.045 "compare": true, 00:22:48.045 "compare_and_write": true, 00:22:48.045 "abort": true, 00:22:48.045 "seek_hole": false, 00:22:48.045 "seek_data": false, 00:22:48.045 "copy": true, 00:22:48.045 "nvme_iov_md": false 00:22:48.045 }, 00:22:48.045 
"memory_domains": [ 00:22:48.045 { 00:22:48.045 "dma_device_id": "system", 00:22:48.045 "dma_device_type": 1 00:22:48.045 } 00:22:48.045 ], 00:22:48.045 "driver_specific": { 00:22:48.045 "nvme": [ 00:22:48.045 { 00:22:48.045 "trid": { 00:22:48.045 "trtype": "TCP", 00:22:48.045 "adrfam": "IPv4", 00:22:48.045 "traddr": "10.0.0.2", 00:22:48.045 "trsvcid": "4420", 00:22:48.045 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:48.045 }, 00:22:48.045 "ctrlr_data": { 00:22:48.045 "cntlid": 1, 00:22:48.045 "vendor_id": "0x8086", 00:22:48.045 "model_number": "SPDK bdev Controller", 00:22:48.045 "serial_number": "00000000000000000000", 00:22:48.045 "firmware_revision": "25.01", 00:22:48.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:48.045 "oacs": { 00:22:48.045 "security": 0, 00:22:48.045 "format": 0, 00:22:48.045 "firmware": 0, 00:22:48.045 "ns_manage": 0 00:22:48.045 }, 00:22:48.045 "multi_ctrlr": true, 00:22:48.045 "ana_reporting": false 00:22:48.045 }, 00:22:48.045 "vs": { 00:22:48.045 "nvme_version": "1.3" 00:22:48.045 }, 00:22:48.045 "ns_data": { 00:22:48.045 "id": 1, 00:22:48.045 "can_share": true 00:22:48.045 } 00:22:48.046 } 00:22:48.046 ], 00:22:48.046 "mp_policy": "active_passive" 00:22:48.046 } 00:22:48.046 } 00:22:48.046 ] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 [2024-10-17 19:30:11.529801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:48.046 [2024-10-17 19:30:11.529858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be2060 (9): Bad file descriptor 00:22:48.046 [2024-10-17 19:30:11.661682] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 [ 00:22:48.046 { 00:22:48.046 "name": "nvme0n1", 00:22:48.046 "aliases": [ 00:22:48.046 "feedf263-eeda-46d6-a6e1-4bf14c4c4e8b" 00:22:48.046 ], 00:22:48.046 "product_name": "NVMe disk", 00:22:48.046 "block_size": 512, 00:22:48.046 "num_blocks": 2097152, 00:22:48.046 "uuid": "feedf263-eeda-46d6-a6e1-4bf14c4c4e8b", 00:22:48.046 "numa_id": 1, 00:22:48.046 "assigned_rate_limits": { 00:22:48.046 "rw_ios_per_sec": 0, 00:22:48.046 "rw_mbytes_per_sec": 0, 00:22:48.046 "r_mbytes_per_sec": 0, 00:22:48.046 "w_mbytes_per_sec": 0 00:22:48.046 }, 00:22:48.046 "claimed": false, 00:22:48.046 "zoned": false, 00:22:48.046 "supported_io_types": { 00:22:48.046 "read": true, 00:22:48.046 "write": true, 00:22:48.046 "unmap": false, 00:22:48.046 "flush": true, 00:22:48.046 "reset": true, 00:22:48.046 "nvme_admin": true, 00:22:48.046 "nvme_io": true, 00:22:48.046 "nvme_io_md": false, 00:22:48.046 "write_zeroes": true, 00:22:48.046 "zcopy": false, 00:22:48.046 "get_zone_info": false, 00:22:48.046 "zone_management": false, 00:22:48.046 "zone_append": false, 00:22:48.046 "compare": true, 00:22:48.046 "compare_and_write": true, 00:22:48.046 "abort": true, 00:22:48.046 "seek_hole": false, 00:22:48.046 "seek_data": false, 00:22:48.046 "copy": true, 00:22:48.046 "nvme_iov_md": false 00:22:48.046 }, 00:22:48.046 "memory_domains": [ 00:22:48.046 { 00:22:48.046 "dma_device_id": "system", 00:22:48.046 "dma_device_type": 1 00:22:48.046 } 00:22:48.046 ], 00:22:48.046 "driver_specific": { 00:22:48.046 "nvme": [ 00:22:48.046 { 00:22:48.046 "trid": { 00:22:48.046 "trtype": "TCP", 00:22:48.046 "adrfam": "IPv4", 00:22:48.046 "traddr": "10.0.0.2", 00:22:48.046 "trsvcid": "4420", 00:22:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:48.046 }, 00:22:48.046 "ctrlr_data": { 00:22:48.046 "cntlid": 2, 00:22:48.046 "vendor_id": "0x8086", 00:22:48.046 "model_number": "SPDK bdev Controller", 00:22:48.046 "serial_number": "00000000000000000000", 00:22:48.046 "firmware_revision": "25.01", 00:22:48.046 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:48.046 "oacs": { 00:22:48.046 "security": 0, 00:22:48.046 "format": 0, 00:22:48.046 "firmware": 0, 00:22:48.046 "ns_manage": 0 00:22:48.046 }, 00:22:48.046 "multi_ctrlr": true, 00:22:48.046 "ana_reporting": false 00:22:48.046 }, 00:22:48.046 "vs": { 00:22:48.046 "nvme_version": "1.3" 00:22:48.046 }, 00:22:48.046 "ns_data": { 00:22:48.046 "id": 1, 00:22:48.046 "can_share": true 00:22:48.046 } 00:22:48.046 } 00:22:48.046 ], 00:22:48.046 "mp_policy": "active_passive" 00:22:48.046 } 00:22:48.046 } 00:22:48.046 ] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
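One detail worth pulling out of the two bdev dumps: the first attach reports cntlid 1, and after bdev_nvme_reset_controller the reconnected controller reports cntlid 2 (the TLS reattach further below lands on cntlid 3). A quick way to watch that field, assuming jq is available (jq is not used by the harness itself):

    # Extract the controller ID from the same JSON shape printed in the trace.
    ./scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'
    # The trace then detaches, which removes nvme0n1 altogether:
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0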
00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KUqIKWxFHt 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KUqIKWxFHt 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.KUqIKWxFHt 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 [2024-10-17 19:30:11.730409] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.046 [2024-10-17 19:30:11.730506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.046 [2024-10-17 19:30:11.754485] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.046 nvme0n1 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.046 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 [ 00:22:48.306 { 00:22:48.306 "name": "nvme0n1", 00:22:48.306 "aliases": [ 00:22:48.306 "feedf263-eeda-46d6-a6e1-4bf14c4c4e8b" 00:22:48.306 ], 00:22:48.306 "product_name": "NVMe disk", 00:22:48.306 "block_size": 512, 00:22:48.306 "num_blocks": 2097152, 00:22:48.306 "uuid": "feedf263-eeda-46d6-a6e1-4bf14c4c4e8b", 00:22:48.306 "numa_id": 1, 00:22:48.306 "assigned_rate_limits": { 00:22:48.306 "rw_ios_per_sec": 0, 00:22:48.306 "rw_mbytes_per_sec": 0, 00:22:48.306 "r_mbytes_per_sec": 0, 00:22:48.306 "w_mbytes_per_sec": 0 00:22:48.306 }, 00:22:48.306 "claimed": false, 00:22:48.306 "zoned": false, 00:22:48.306 "supported_io_types": { 00:22:48.306 "read": true, 00:22:48.306 "write": true, 00:22:48.306 "unmap": false, 00:22:48.306 "flush": true, 00:22:48.306 "reset": true, 00:22:48.306 "nvme_admin": true, 00:22:48.306 "nvme_io": true, 00:22:48.306 "nvme_io_md": false, 00:22:48.306 "write_zeroes": true, 00:22:48.306 "zcopy": false, 00:22:48.306 "get_zone_info": false, 00:22:48.306 "zone_management": false, 00:22:48.306 "zone_append": false, 00:22:48.306 "compare": true, 00:22:48.306 "compare_and_write": true, 00:22:48.306 "abort": true, 00:22:48.306 "seek_hole": false, 00:22:48.306 "seek_data": false, 00:22:48.306 "copy": true, 00:22:48.306 "nvme_iov_md": false 00:22:48.306 }, 00:22:48.306 "memory_domains": [ 00:22:48.306 { 00:22:48.306 "dma_device_id": "system", 00:22:48.306 "dma_device_type": 1 00:22:48.306 } 00:22:48.306 ], 00:22:48.306 "driver_specific": { 00:22:48.306 "nvme": [ 00:22:48.306 { 00:22:48.306 "trid": { 00:22:48.306 "trtype": "TCP", 00:22:48.306 "adrfam": "IPv4", 00:22:48.306 "traddr": "10.0.0.2", 00:22:48.306 "trsvcid": "4421", 00:22:48.306 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:48.306 }, 00:22:48.306 "ctrlr_data": { 00:22:48.306 "cntlid": 3, 00:22:48.306 "vendor_id": "0x8086", 00:22:48.306 "model_number": "SPDK bdev Controller", 00:22:48.306 "serial_number": "00000000000000000000", 00:22:48.306 "firmware_revision": "25.01", 00:22:48.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:48.306 "oacs": { 00:22:48.306 "security": 0, 00:22:48.306 "format": 0, 00:22:48.306 "firmware": 0, 00:22:48.306 "ns_manage": 0 00:22:48.306 }, 00:22:48.306 "multi_ctrlr": true, 00:22:48.306 "ana_reporting": false 00:22:48.306 }, 00:22:48.306 "vs": { 00:22:48.306 "nvme_version": "1.3" 00:22:48.306 }, 00:22:48.306 "ns_data": { 00:22:48.306 "id": 1, 00:22:48.306 "can_share": true 00:22:48.306 } 00:22:48.306 } 00:22:48.306 ], 00:22:48.306 "mp_policy": "active_passive" 00:22:48.306 } 00:22:48.306 } 00:22:48.306 ] 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.KUqIKWxFHt 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
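The TLS leg traced above (host/async_init.sh@53-66) stages a PSK on disk, registers it with the keyring, and then references it by name on both the new listener and the reattach. As a consolidated sketch, with the literal PSK and the key name key0 taken from the trace (the mktemp path will differ per run):

    # Stage the interchange-format PSK with owner-only permissions.
    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    ./scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
    # Require explicit host grants, then listen on 4421 with TLS (--secure-channel).
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
    # Reattach over the secure listener; hostnqn and key name must match the grant.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
    # The test removes the key file on the way out (rm -f "$KEY_PATH").

Note that both ends resolve the PSK through the keyring name key0 rather than the file path, which is why the file can be deleted once the test finishes.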
00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:48.306 rmmod nvme_tcp 00:22:48.306 rmmod nvme_fabrics 00:22:48.306 rmmod nvme_keyring 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 2174860 ']' 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 2174860 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2174860 ']' 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2174860 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2174860 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2174860' 00:22:48.306 killing process with pid 2174860 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2174860 00:22:48.306 19:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2174860 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.565 19:30:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:50.471 00:22:50.471 real 0m9.446s 00:22:50.471 user 0m3.025s 00:22:50.471 sys 0m4.840s 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:50.471 ************************************ 00:22:50.471 END TEST nvmf_async_init 00:22:50.471 ************************************ 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.471 19:30:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.731 ************************************ 00:22:50.731 START TEST dma 00:22:50.731 ************************************ 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:50.731 * Looking for test storage... 00:22:50.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:50.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.731 --rc genhtml_branch_coverage=1 00:22:50.731 --rc genhtml_function_coverage=1 00:22:50.731 --rc genhtml_legend=1 00:22:50.731 --rc geninfo_all_blocks=1 00:22:50.731 --rc geninfo_unexecuted_blocks=1 00:22:50.731 00:22:50.731 ' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:50.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.731 --rc genhtml_branch_coverage=1 00:22:50.731 --rc genhtml_function_coverage=1 00:22:50.731 --rc genhtml_legend=1 00:22:50.731 --rc geninfo_all_blocks=1 00:22:50.731 --rc geninfo_unexecuted_blocks=1 00:22:50.731 00:22:50.731 ' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:50.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.731 --rc genhtml_branch_coverage=1 00:22:50.731 --rc genhtml_function_coverage=1 00:22:50.731 --rc genhtml_legend=1 00:22:50.731 --rc geninfo_all_blocks=1 00:22:50.731 --rc geninfo_unexecuted_blocks=1 00:22:50.731 00:22:50.731 ' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:50.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.731 --rc genhtml_branch_coverage=1 00:22:50.731 --rc genhtml_function_coverage=1 00:22:50.731 --rc genhtml_legend=1 00:22:50.731 --rc geninfo_all_blocks=1 00:22:50.731 --rc geninfo_unexecuted_blocks=1 00:22:50.731 00:22:50.731 ' 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.731 
19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.731 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:50.732 00:22:50.732 real 0m0.209s 00:22:50.732 user 0m0.128s 00:22:50.732 sys 0m0.096s 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:50.732 ************************************ 00:22:50.732 END TEST dma 00:22:50.732 ************************************ 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.732 19:30:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:50.992 ************************************ 00:22:50.992 START TEST nvmf_identify 00:22:50.992 
************************************ 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:50.992 * Looking for test storage... 00:22:50.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:50.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.992 --rc genhtml_branch_coverage=1 00:22:50.992 --rc genhtml_function_coverage=1 00:22:50.992 --rc genhtml_legend=1 00:22:50.992 --rc geninfo_all_blocks=1 00:22:50.992 --rc geninfo_unexecuted_blocks=1 00:22:50.992 00:22:50.992 ' 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:50.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.992 --rc genhtml_branch_coverage=1 00:22:50.992 --rc genhtml_function_coverage=1 00:22:50.992 --rc genhtml_legend=1 00:22:50.992 --rc geninfo_all_blocks=1 00:22:50.992 --rc geninfo_unexecuted_blocks=1 00:22:50.992 00:22:50.992 ' 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:50.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.992 --rc genhtml_branch_coverage=1 00:22:50.992 --rc genhtml_function_coverage=1 00:22:50.992 --rc genhtml_legend=1 00:22:50.992 --rc geninfo_all_blocks=1 00:22:50.992 --rc geninfo_unexecuted_blocks=1 00:22:50.992 00:22:50.992 ' 00:22:50.992 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:50.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:50.992 --rc genhtml_branch_coverage=1 00:22:50.992 --rc genhtml_function_coverage=1 00:22:50.992 --rc genhtml_legend=1 00:22:50.992 --rc geninfo_all_blocks=1 00:22:50.992 --rc geninfo_unexecuted_blocks=1 00:22:50.992 00:22:50.992 ' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:50.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:50.993 19:30:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.566 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.566 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.566 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:22:57.567 00:22:57.567 --- 10.0.0.2 ping statistics --- 00:22:57.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.567 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:22:57.567 00:22:57.567 --- 10.0.0.1 ping statistics --- 00:22:57.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.567 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2178679 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2178679 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2178679 ']' 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.567 19:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:57.567 [2024-10-17 19:30:20.736533] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
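Before the target starts, nvmf_tcp_init wires the two E810 ports into a self-contained loopback: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, and a ping in each direction confirms reachability before nvmf_tgt is launched inside the namespace. Condensed from the commands traced above (a sketch of the sequence, not a substitute for nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

With both pings answered, the target application is then started under "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD prefix), which is why nvmf_tgt below listens on 10.0.0.2 while the identify tool connects from the root namespace.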
00:22:57.567 [2024-10-17 19:30:20.736574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.567 [2024-10-17 19:30:20.814014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.567 [2024-10-17 19:30:20.857796] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.567 [2024-10-17 19:30:20.857830] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.567 [2024-10-17 19:30:20.857838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.567 [2024-10-17 19:30:20.857845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.567 [2024-10-17 19:30:20.857850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.567 [2024-10-17 19:30:20.859171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.567 [2024-10-17 19:30:20.859207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.567 [2024-10-17 19:30:20.859316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.567 [2024-10-17 19:30:20.859317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.826 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.826 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:57.826 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:57.827 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.827 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:57.827 [2024-10-17 19:30:21.589176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.827 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.827 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:57.827 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:57.827 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 Malloc0 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 [2024-10-17 19:30:21.691868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.087 [ 00:22:58.087 { 00:22:58.087 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:58.087 "subtype": "Discovery", 00:22:58.087 "listen_addresses": [ 00:22:58.087 { 00:22:58.087 "trtype": "TCP", 00:22:58.087 "adrfam": "IPv4", 00:22:58.087 "traddr": "10.0.0.2", 00:22:58.087 "trsvcid": "4420" 00:22:58.087 } 00:22:58.087 ], 00:22:58.087 "allow_any_host": true, 00:22:58.087 "hosts": [] 00:22:58.087 }, 00:22:58.087 { 00:22:58.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.087 "subtype": "NVMe", 00:22:58.087 "listen_addresses": [ 00:22:58.087 { 00:22:58.087 "trtype": "TCP", 00:22:58.087 "adrfam": "IPv4", 00:22:58.087 "traddr": "10.0.0.2", 00:22:58.087 "trsvcid": "4420" 00:22:58.087 } 00:22:58.087 ], 00:22:58.087 "allow_any_host": true, 00:22:58.087 "hosts": [], 00:22:58.087 "serial_number": "SPDK00000000000001", 00:22:58.087 "model_number": "SPDK bdev Controller", 00:22:58.087 "max_namespaces": 32, 00:22:58.087 "min_cntlid": 1, 00:22:58.087 "max_cntlid": 65519, 00:22:58.087 "namespaces": [ 00:22:58.087 { 00:22:58.087 "nsid": 1, 00:22:58.087 "bdev_name": "Malloc0", 00:22:58.087 "name": "Malloc0", 00:22:58.087 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:58.087 "eui64": "ABCDEF0123456789", 00:22:58.087 "uuid": "a1a1af50-ac53-4e5e-a19a-5b811c2f8384" 00:22:58.087 } 00:22:58.087 ] 00:22:58.087 } 00:22:58.087 ] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.087 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:58.087 [2024-10-17 19:30:21.744141] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:22:58.087 [2024-10-17 19:30:21.744185] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178891 ] 00:22:58.087 [2024-10-17 19:30:21.790349] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:58.088 [2024-10-17 19:30:21.790405] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:58.088 [2024-10-17 19:30:21.790410] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:58.088 [2024-10-17 19:30:21.790421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:58.088 [2024-10-17 19:30:21.790428] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:58.088 [2024-10-17 19:30:21.790981] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:58.088 [2024-10-17 19:30:21.791013] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b13760 0 00:22:58.088 [2024-10-17 19:30:21.797614] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:58.088 [2024-10-17 19:30:21.797628] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:58.088 [2024-10-17 19:30:21.797633] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:58.088 [2024-10-17 19:30:21.797636] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:58.088 [2024-10-17 19:30:21.797668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.797674] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.797677] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.797692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:58.088 [2024-10-17 19:30:21.797709] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 [2024-10-17 19:30:21.804609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.804618] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.088 [2024-10-17 19:30:21.804621] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804625] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.804637] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:58.088 [2024-10-17 19:30:21.804643] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:58.088 [2024-10-17 19:30:21.804649] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:58.088 [2024-10-17 19:30:21.804661] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804665] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804668] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.804675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.088 [2024-10-17 19:30:21.804687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 [2024-10-17 19:30:21.804858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.804864] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.088 [2024-10-17 19:30:21.804867] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804870] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.804875] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:58.088 [2024-10-17 19:30:21.804881] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:58.088 [2024-10-17 19:30:21.804888] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804891] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.804900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.088 [2024-10-17 19:30:21.804909] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 [2024-10-17 19:30:21.804968] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.804974] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.088 [2024-10-17 19:30:21.804977] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.804980] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.804984] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:58.088 [2024-10-17 19:30:21.804991] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:58.088 [2024-10-17 19:30:21.804997] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805000] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.805011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.088 [2024-10-17 19:30:21.805021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 
[2024-10-17 19:30:21.805087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.805093] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.088 [2024-10-17 19:30:21.805096] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.805103] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:58.088 [2024-10-17 19:30:21.805111] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805115] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805118] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.805123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.088 [2024-10-17 19:30:21.805132] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 [2024-10-17 19:30:21.805193] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.805199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.088 [2024-10-17 19:30:21.805201] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805205] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.805209] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:58.088 [2024-10-17 19:30:21.805213] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:58.088 [2024-10-17 19:30:21.805219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:58.088 [2024-10-17 19:30:21.805324] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:58.088 [2024-10-17 19:30:21.805328] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:58.088 [2024-10-17 19:30:21.805336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805339] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805342] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.805348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.088 [2024-10-17 19:30:21.805357] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 [2024-10-17 19:30:21.805416] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.805422] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =5 00:22:58.088 [2024-10-17 19:30:21.805425] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805428] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.805432] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:58.088 [2024-10-17 19:30:21.805440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805443] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805448] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.805454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.088 [2024-10-17 19:30:21.805463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.088 [2024-10-17 19:30:21.805524] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.088 [2024-10-17 19:30:21.805530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.088 [2024-10-17 19:30:21.805532] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.088 [2024-10-17 19:30:21.805539] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:58.088 [2024-10-17 19:30:21.805543] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:58.088 [2024-10-17 19:30:21.805550] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:58.088 [2024-10-17 19:30:21.805561] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:58.088 [2024-10-17 19:30:21.805569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.088 [2024-10-17 19:30:21.805573] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.088 [2024-10-17 19:30:21.805578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.089 [2024-10-17 19:30:21.805588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.089 [2024-10-17 19:30:21.805714] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.089 [2024-10-17 19:30:21.805720] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.089 [2024-10-17 19:30:21.805723] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b13760): datao=0, datal=4096, cccid=0 00:22:58.089 [2024-10-17 19:30:21.805731] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b751c0) on tqpair(0x1b13760): expected_datao=0, 
payload_size=4096 00:22:58.089 [2024-10-17 19:30:21.805735] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805742] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805746] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805763] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.089 [2024-10-17 19:30:21.805769] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.089 [2024-10-17 19:30:21.805772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805775] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.089 [2024-10-17 19:30:21.805784] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:58.089 [2024-10-17 19:30:21.805789] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:58.089 [2024-10-17 19:30:21.805792] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:58.089 [2024-10-17 19:30:21.805797] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:58.089 [2024-10-17 19:30:21.805801] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:58.089 [2024-10-17 19:30:21.805809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:58.089 [2024-10-17 19:30:21.805818] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:58.089 [2024-10-17 19:30:21.805825] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805828] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805831] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.805837] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:58.089 [2024-10-17 19:30:21.805848] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.089 [2024-10-17 19:30:21.805914] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.089 [2024-10-17 19:30:21.805920] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.089 [2024-10-17 19:30:21.805922] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805925] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.089 [2024-10-17 19:30:21.805932] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805936] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805939] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.805944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.089 [2024-10-17 19:30:21.805949] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805952] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805955] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.805960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.089 [2024-10-17 19:30:21.805965] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805968] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.805976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.089 [2024-10-17 19:30:21.805981] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805984] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.805987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.805992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.089 [2024-10-17 19:30:21.805996] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:58.089 [2024-10-17 19:30:21.806006] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:58.089 [2024-10-17 19:30:21.806012] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806016] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.806021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.089 [2024-10-17 19:30:21.806033] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b751c0, cid 0, qid 0 00:22:58.089 [2024-10-17 19:30:21.806039] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75340, cid 1, qid 0 00:22:58.089 [2024-10-17 19:30:21.806043] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b754c0, cid 2, qid 0 00:22:58.089 [2024-10-17 19:30:21.806047] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.089 [2024-10-17 19:30:21.806051] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b757c0, cid 4, qid 0 00:22:58.089 [2024-10-17 19:30:21.806146] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.089 [2024-10-17 19:30:21.806152] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.089 [2024-10-17 19:30:21.806155] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806158] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1b757c0) on tqpair=0x1b13760 00:22:58.089 [2024-10-17 19:30:21.806163] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:58.089 [2024-10-17 19:30:21.806167] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:58.089 [2024-10-17 19:30:21.806176] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806180] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.806185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.089 [2024-10-17 19:30:21.806195] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b757c0, cid 4, qid 0 00:22:58.089 [2024-10-17 19:30:21.806265] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.089 [2024-10-17 19:30:21.806270] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.089 [2024-10-17 19:30:21.806273] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806276] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b13760): datao=0, datal=4096, cccid=4 00:22:58.089 [2024-10-17 19:30:21.806280] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b757c0) on tqpair(0x1b13760): expected_datao=0, payload_size=4096 00:22:58.089 [2024-10-17 19:30:21.806284] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806294] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806297] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.089 [2024-10-17 19:30:21.806347] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.089 [2024-10-17 19:30:21.806349] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806352] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b757c0) on tqpair=0x1b13760 00:22:58.089 [2024-10-17 19:30:21.806364] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:58.089 [2024-10-17 19:30:21.806386] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806390] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.806396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.089 [2024-10-17 19:30:21.806402] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806405] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.089 [2024-10-17 19:30:21.806408] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b13760) 00:22:58.089 [2024-10-17 19:30:21.806413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.089 [2024-10-17 
19:30:21.806426] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b757c0, cid 4, qid 0 00:22:58.089 [2024-10-17 19:30:21.806431] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75940, cid 5, qid 0 00:22:58.089 [2024-10-17 19:30:21.806532] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.089 [2024-10-17 19:30:21.806537] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.089 [2024-10-17 19:30:21.806540] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.806543] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b13760): datao=0, datal=1024, cccid=4 00:22:58.090 [2024-10-17 19:30:21.806547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b757c0) on tqpair(0x1b13760): expected_datao=0, payload_size=1024 00:22:58.090 [2024-10-17 19:30:21.806551] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.806556] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.806559] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.806564] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.090 [2024-10-17 19:30:21.806569] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.090 [2024-10-17 19:30:21.806572] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.806575] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75940) on tqpair=0x1b13760 00:22:58.090 [2024-10-17 19:30:21.847740] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.090 [2024-10-17 19:30:21.847752] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.090 [2024-10-17 19:30:21.847756] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.847759] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b757c0) on tqpair=0x1b13760 00:22:58.090 [2024-10-17 19:30:21.847771] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.847775] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b13760) 00:22:58.090 [2024-10-17 19:30:21.847782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.090 [2024-10-17 19:30:21.847799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b757c0, cid 4, qid 0 00:22:58.090 [2024-10-17 19:30:21.847873] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.090 [2024-10-17 19:30:21.847879] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.090 [2024-10-17 19:30:21.847883] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.847886] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b13760): datao=0, datal=3072, cccid=4 00:22:58.090 [2024-10-17 19:30:21.847890] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b757c0) on tqpair(0x1b13760): expected_datao=0, payload_size=3072 00:22:58.090 [2024-10-17 19:30:21.847893] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.847906] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.090 [2024-10-17 19:30:21.847910] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.892613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.357 [2024-10-17 19:30:21.892629] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.357 [2024-10-17 19:30:21.892633] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.892636] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b757c0) on tqpair=0x1b13760 00:22:58.357 [2024-10-17 19:30:21.892647] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.892651] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b13760) 00:22:58.357 [2024-10-17 19:30:21.892658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.357 [2024-10-17 19:30:21.892679] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b757c0, cid 4, qid 0 00:22:58.357 [2024-10-17 19:30:21.892825] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.357 [2024-10-17 19:30:21.892831] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.357 [2024-10-17 19:30:21.892835] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.892838] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b13760): datao=0, datal=8, cccid=4 00:22:58.357 [2024-10-17 19:30:21.892842] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b757c0) on tqpair(0x1b13760): expected_datao=0, payload_size=8 00:22:58.357 [2024-10-17 19:30:21.892846] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.892852] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.892855] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.933737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.357 [2024-10-17 19:30:21.933746] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.357 [2024-10-17 19:30:21.933749] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.357 [2024-10-17 19:30:21.933752] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b757c0) on tqpair=0x1b13760 00:22:58.357 ===================================================== 00:22:58.357 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:58.357 ===================================================== 00:22:58.357 Controller Capabilities/Features 00:22:58.357 ================================ 00:22:58.357 Vendor ID: 0000 00:22:58.357 Subsystem Vendor ID: 0000 00:22:58.357 Serial Number: .................... 00:22:58.357 Model Number: ........................................ 
00:22:58.357 Firmware Version: 25.01 00:22:58.357 Recommended Arb Burst: 0 00:22:58.357 IEEE OUI Identifier: 00 00 00 00:22:58.357 Multi-path I/O 00:22:58.357 May have multiple subsystem ports: No 00:22:58.357 May have multiple controllers: No 00:22:58.357 Associated with SR-IOV VF: No 00:22:58.357 Max Data Transfer Size: 131072 00:22:58.357 Max Number of Namespaces: 0 00:22:58.357 Max Number of I/O Queues: 1024 00:22:58.357 NVMe Specification Version (VS): 1.3 00:22:58.357 NVMe Specification Version (Identify): 1.3 00:22:58.357 Maximum Queue Entries: 128 00:22:58.357 Contiguous Queues Required: Yes 00:22:58.357 Arbitration Mechanisms Supported 00:22:58.357 Weighted Round Robin: Not Supported 00:22:58.357 Vendor Specific: Not Supported 00:22:58.357 Reset Timeout: 15000 ms 00:22:58.357 Doorbell Stride: 4 bytes 00:22:58.357 NVM Subsystem Reset: Not Supported 00:22:58.357 Command Sets Supported 00:22:58.357 NVM Command Set: Supported 00:22:58.357 Boot Partition: Not Supported 00:22:58.357 Memory Page Size Minimum: 4096 bytes 00:22:58.357 Memory Page Size Maximum: 4096 bytes 00:22:58.357 Persistent Memory Region: Not Supported 00:22:58.357 Optional Asynchronous Events Supported 00:22:58.357 Namespace Attribute Notices: Not Supported 00:22:58.357 Firmware Activation Notices: Not Supported 00:22:58.357 ANA Change Notices: Not Supported 00:22:58.357 PLE Aggregate Log Change Notices: Not Supported 00:22:58.357 LBA Status Info Alert Notices: Not Supported 00:22:58.357 EGE Aggregate Log Change Notices: Not Supported 00:22:58.357 Normal NVM Subsystem Shutdown event: Not Supported 00:22:58.357 Zone Descriptor Change Notices: Not Supported 00:22:58.357 Discovery Log Change Notices: Supported 00:22:58.357 Controller Attributes 00:22:58.357 128-bit Host Identifier: Not Supported 00:22:58.357 Non-Operational Permissive Mode: Not Supported 00:22:58.357 NVM Sets: Not Supported 00:22:58.357 Read Recovery Levels: Not Supported 00:22:58.357 Endurance Groups: Not Supported 00:22:58.357 Predictable Latency Mode: Not Supported 00:22:58.357 Traffic Based Keep ALive: Not Supported 00:22:58.357 Namespace Granularity: Not Supported 00:22:58.357 SQ Associations: Not Supported 00:22:58.357 UUID List: Not Supported 00:22:58.357 Multi-Domain Subsystem: Not Supported 00:22:58.357 Fixed Capacity Management: Not Supported 00:22:58.357 Variable Capacity Management: Not Supported 00:22:58.357 Delete Endurance Group: Not Supported 00:22:58.357 Delete NVM Set: Not Supported 00:22:58.357 Extended LBA Formats Supported: Not Supported 00:22:58.357 Flexible Data Placement Supported: Not Supported 00:22:58.357 00:22:58.357 Controller Memory Buffer Support 00:22:58.357 ================================ 00:22:58.357 Supported: No 00:22:58.357 00:22:58.357 Persistent Memory Region Support 00:22:58.357 ================================ 00:22:58.357 Supported: No 00:22:58.357 00:22:58.357 Admin Command Set Attributes 00:22:58.357 ============================ 00:22:58.357 Security Send/Receive: Not Supported 00:22:58.357 Format NVM: Not Supported 00:22:58.357 Firmware Activate/Download: Not Supported 00:22:58.357 Namespace Management: Not Supported 00:22:58.357 Device Self-Test: Not Supported 00:22:58.357 Directives: Not Supported 00:22:58.357 NVMe-MI: Not Supported 00:22:58.357 Virtualization Management: Not Supported 00:22:58.357 Doorbell Buffer Config: Not Supported 00:22:58.357 Get LBA Status Capability: Not Supported 00:22:58.357 Command & Feature Lockdown Capability: Not Supported 00:22:58.357 Abort Command Limit: 1 00:22:58.357 Async 
Event Request Limit: 4 00:22:58.357 Number of Firmware Slots: N/A 00:22:58.357 Firmware Slot 1 Read-Only: N/A 00:22:58.357 Firmware Activation Without Reset: N/A 00:22:58.357 Multiple Update Detection Support: N/A 00:22:58.357 Firmware Update Granularity: No Information Provided 00:22:58.358 Per-Namespace SMART Log: No 00:22:58.358 Asymmetric Namespace Access Log Page: Not Supported 00:22:58.358 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:58.358 Command Effects Log Page: Not Supported 00:22:58.358 Get Log Page Extended Data: Supported 00:22:58.358 Telemetry Log Pages: Not Supported 00:22:58.358 Persistent Event Log Pages: Not Supported 00:22:58.358 Supported Log Pages Log Page: May Support 00:22:58.358 Commands Supported & Effects Log Page: Not Supported 00:22:58.358 Feature Identifiers & Effects Log Page:May Support 00:22:58.358 NVMe-MI Commands & Effects Log Page: May Support 00:22:58.358 Data Area 4 for Telemetry Log: Not Supported 00:22:58.358 Error Log Page Entries Supported: 128 00:22:58.358 Keep Alive: Not Supported 00:22:58.358 00:22:58.358 NVM Command Set Attributes 00:22:58.358 ========================== 00:22:58.358 Submission Queue Entry Size 00:22:58.358 Max: 1 00:22:58.358 Min: 1 00:22:58.358 Completion Queue Entry Size 00:22:58.358 Max: 1 00:22:58.358 Min: 1 00:22:58.358 Number of Namespaces: 0 00:22:58.358 Compare Command: Not Supported 00:22:58.358 Write Uncorrectable Command: Not Supported 00:22:58.358 Dataset Management Command: Not Supported 00:22:58.358 Write Zeroes Command: Not Supported 00:22:58.358 Set Features Save Field: Not Supported 00:22:58.358 Reservations: Not Supported 00:22:58.358 Timestamp: Not Supported 00:22:58.358 Copy: Not Supported 00:22:58.358 Volatile Write Cache: Not Present 00:22:58.358 Atomic Write Unit (Normal): 1 00:22:58.358 Atomic Write Unit (PFail): 1 00:22:58.358 Atomic Compare & Write Unit: 1 00:22:58.358 Fused Compare & Write: Supported 00:22:58.358 Scatter-Gather List 00:22:58.358 SGL Command Set: Supported 00:22:58.358 SGL Keyed: Supported 00:22:58.358 SGL Bit Bucket Descriptor: Not Supported 00:22:58.358 SGL Metadata Pointer: Not Supported 00:22:58.358 Oversized SGL: Not Supported 00:22:58.358 SGL Metadata Address: Not Supported 00:22:58.358 SGL Offset: Supported 00:22:58.358 Transport SGL Data Block: Not Supported 00:22:58.358 Replay Protected Memory Block: Not Supported 00:22:58.358 00:22:58.358 Firmware Slot Information 00:22:58.358 ========================= 00:22:58.358 Active slot: 0 00:22:58.358 00:22:58.358 00:22:58.358 Error Log 00:22:58.358 ========= 00:22:58.358 00:22:58.358 Active Namespaces 00:22:58.358 ================= 00:22:58.358 Discovery Log Page 00:22:58.358 ================== 00:22:58.358 Generation Counter: 2 00:22:58.358 Number of Records: 2 00:22:58.358 Record Format: 0 00:22:58.358 00:22:58.358 Discovery Log Entry 0 00:22:58.358 ---------------------- 00:22:58.358 Transport Type: 3 (TCP) 00:22:58.358 Address Family: 1 (IPv4) 00:22:58.358 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:58.358 Entry Flags: 00:22:58.358 Duplicate Returned Information: 1 00:22:58.358 Explicit Persistent Connection Support for Discovery: 1 00:22:58.358 Transport Requirements: 00:22:58.358 Secure Channel: Not Required 00:22:58.358 Port ID: 0 (0x0000) 00:22:58.358 Controller ID: 65535 (0xffff) 00:22:58.358 Admin Max SQ Size: 128 00:22:58.358 Transport Service Identifier: 4420 00:22:58.358 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:58.358 Transport Address: 10.0.0.2 00:22:58.358 
Discovery Log Entry 1 00:22:58.358 ---------------------- 00:22:58.358 Transport Type: 3 (TCP) 00:22:58.358 Address Family: 1 (IPv4) 00:22:58.358 Subsystem Type: 2 (NVM Subsystem) 00:22:58.358 Entry Flags: 00:22:58.358 Duplicate Returned Information: 0 00:22:58.358 Explicit Persistent Connection Support for Discovery: 0 00:22:58.358 Transport Requirements: 00:22:58.358 Secure Channel: Not Required 00:22:58.358 Port ID: 0 (0x0000) 00:22:58.358 Controller ID: 65535 (0xffff) 00:22:58.358 Admin Max SQ Size: 128 00:22:58.358 Transport Service Identifier: 4420 00:22:58.358 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:58.358 Transport Address: 10.0.0.2 [2024-10-17 19:30:21.933835] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:58.358 [2024-10-17 19:30:21.933846] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b751c0) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.933853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.358 [2024-10-17 19:30:21.933858] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75340) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.933862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.358 [2024-10-17 19:30:21.933866] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b754c0) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.933870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.358 [2024-10-17 19:30:21.933874] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.933878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.358 [2024-10-17 19:30:21.933886] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.933889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.933893] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.358 [2024-10-17 19:30:21.933899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.358 [2024-10-17 19:30:21.933912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.358 [2024-10-17 19:30:21.933974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.358 [2024-10-17 19:30:21.933980] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.358 [2024-10-17 19:30:21.933983] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.933987] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.933993] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.933996] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.933999] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.358 [2024-10-17 
19:30:21.934005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.358 [2024-10-17 19:30:21.934019] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.358 [2024-10-17 19:30:21.934103] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.358 [2024-10-17 19:30:21.934108] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.358 [2024-10-17 19:30:21.934111] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.934114] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.934118] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:58.358 [2024-10-17 19:30:21.934124] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:58.358 [2024-10-17 19:30:21.934133] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.934137] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.934140] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.358 [2024-10-17 19:30:21.934145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.358 [2024-10-17 19:30:21.934154] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.358 [2024-10-17 19:30:21.934220] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.358 [2024-10-17 19:30:21.934225] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.358 [2024-10-17 19:30:21.934228] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.934232] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.358 [2024-10-17 19:30:21.934240] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.934244] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.358 [2024-10-17 19:30:21.934247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.358 [2024-10-17 19:30:21.934252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934262] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.934321] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.934327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.934329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934332] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.934341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934345] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934347] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.934353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934362] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.934426] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.934432] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.934435] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934438] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.934446] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934449] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934454] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.934459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934469] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.934544] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.934550] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.934553] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934556] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.934564] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934568] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934571] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.934576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.934660] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.934667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.934669] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.934681] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934684] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934687] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.934693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934702] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.934782] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.934788] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.934790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934794] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.934802] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934806] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934809] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.934814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934823] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.934895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.934901] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.934904] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934907] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.934915] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934919] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.934922] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.934929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.934938] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.935012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.935017] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.935021] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935024] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.935031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935035] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935038] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.935044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.935052] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 
[2024-10-17 19:30:21.935129] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.935135] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.935138] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935141] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.935149] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935155] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.935161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.935170] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.935236] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.935242] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.935245] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935248] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.935257] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935263] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.935269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.935278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.935339] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.935344] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.935347] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935351] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.935359] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935362] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935365] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.935371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.935381] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.935455] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.935461] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
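
The run of near-identical capsule/PDU records above and below is the discovery controller's shutdown poll. nvme_ctrlr_shutdown_set_cc_done has already set CC.SHN and armed a 10000 ms timeout (logged further up as "shutdown timeout = 10000 ms"), and each iteration issues one Fabrics Property Get for CSTS until CSTS.SHST reads back as complete; nvme_ctrlr_shutdown_poll_async reports the result a little further down as "shutdown complete in 6 milliseconds". A minimal sketch of that loop, assuming hypothetical fabric_property_get/fabric_property_set/now_ms helpers (the register offsets and field encodings are from the NVMe spec; this is not SPDK's actual internal code):

    /*
     * Shutdown poll sketch. Over NVMe-oF, register access is carried by the
     * Fabrics Property Get/Set commands that the log prints as
     * "FABRIC PROPERTY GET/SET"; the helpers below are assumed stand-ins.
     */
    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC    0x14        /* Controller Configuration */
    #define NVME_REG_CSTS  0x1c        /* Controller Status */
    #define CC_SHN_NORMAL  (1u << 14)  /* CC.SHN = 01b: normal shutdown */
    #define CSTS_SHST_MASK (3u << 2)   /* CSTS.SHST field */
    #define CSTS_SHST_DONE (2u << 2)   /* SHST = 10b: shutdown complete */

    extern uint32_t fabric_property_get(uint32_t offset);             /* assumed */
    extern void     fabric_property_set(uint32_t offset, uint32_t v); /* assumed */
    extern uint64_t now_ms(void);                                     /* assumed */

    /* Request a normal shutdown, then poll CSTS.SHST until the controller
     * reports completion or the 10000 ms budget from the log expires. */
    static bool shutdown_controller(uint64_t timeout_ms)
    {
        uint32_t cc = fabric_property_get(NVME_REG_CC);

        fabric_property_set(NVME_REG_CC, cc | CC_SHN_NORMAL);

        uint64_t deadline = now_ms() + timeout_ms;
        while (now_ms() < deadline) {
            uint32_t csts = fabric_property_get(NVME_REG_CSTS);
            if ((csts & CSTS_SHST_MASK) == CSTS_SHST_DONE)
                return true;  /* in this run: complete after ~6 ms */
        }
        return false;
    }

Each poll iteration is what produces one "FABRIC PROPERTY GET qid:0 cid:3" record plus the surrounding capsule-send and PDU bookkeeping in the trace.
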
00:22:58.359 [2024-10-17 19:30:21.935464] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.359 [2024-10-17 19:30:21.935475] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935478] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935481] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.359 [2024-10-17 19:30:21.935487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.359 [2024-10-17 19:30:21.935496] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.359 [2024-10-17 19:30:21.935572] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.359 [2024-10-17 19:30:21.935578] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.359 [2024-10-17 19:30:21.935581] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.359 [2024-10-17 19:30:21.935584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.935592] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935596] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935598] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.935608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.935617] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.935679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.935685] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.935688] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935691] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.935699] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935703] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935706] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.935711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.935720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.935779] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.935784] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.935788] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935791] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.935799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935802] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935805] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.935811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.935820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.935880] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.935886] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.935889] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935892] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.935899] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935906] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.935912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.935921] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.935977] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.935982] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.935985] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.935988] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.935996] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936000] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936003] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.936008] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.936018] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.936087] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.936092] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.936095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936098] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.936107] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936111] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936114] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.936119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.936128] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.936187] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.936193] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.936196] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.936207] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936211] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936213] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.936219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.936228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.936286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.936293] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.936296] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936300] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.936307] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936311] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936314] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.936319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.936329] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.936387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.936393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.936396] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.936407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936414] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 
[2024-10-17 19:30:21.936419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.936429] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.936489] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.936495] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.936498] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936501] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.936509] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936513] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.936515] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.936521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.936530] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.936592] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.936598] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.940606] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.940619] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.360 [2024-10-17 19:30:21.940631] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.940634] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.940638] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b13760) 00:22:58.360 [2024-10-17 19:30:21.940644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.360 [2024-10-17 19:30:21.940656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b75640, cid 3, qid 0 00:22:58.360 [2024-10-17 19:30:21.940806] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.360 [2024-10-17 19:30:21.940812] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.360 [2024-10-17 19:30:21.940817] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.360 [2024-10-17 19:30:21.940821] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b75640) on tqpair=0x1b13760 00:22:58.361 [2024-10-17 19:30:21.940827] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:58.361 00:22:58.361 19:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:58.361 [2024-10-17 19:30:21.979460] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
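
With the discovery controller destructed, identify.sh launches spdk_nvme_identify against the data subsystem, and the trace that follows walks the full controller init state machine: TCP connect plus ICReq/ICResp, FABRIC CONNECT on the admin queue, Property Gets for VS and CAP, CC.EN = 1 and the wait for CSTS.RDY = 1, then IDENTIFY controller, AER configuration, keep-alive, queue-count negotiation, and the per-namespace IDENTIFY passes. All of it is driven by one public-API call; a minimal host program using the same '-r' connection string might look like the sketch below (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data, and spdk_nvme_detach are real SPDK API, but the bare-bones main() and omitted error handling are illustrative):

    /* Minimal NVMe-oF/TCP host sketch mirroring the spdk_nvme_identify
     * invocation above; spdk_nvme_connect() performs the entire init state
     * machine traced in this log. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        env_opts.opts_size = sizeof(env_opts); /* required on recent SPDK */
        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0)
            return 1;

        /* Same transport ID string the test passes with -r. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
            return 1;

        /* Blocks through connect adminq -> icreq/icresp -> FABRIC CONNECT ->
         * read vs/cap -> enable controller -> identify/AER/keep-alive setup,
         * i.e. every "setting state to ..." line in the trace below. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
            return 1;

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("subnqn: %s, mdts: %u\n", cdata->subnqn, cdata->mdts);

        spdk_nvme_detach(ctrlr); /* triggers a shutdown sequence like the one above */
        return 0;
    }

spdk_nvme_connect() only returns once the state machine reaches "ready (no timeout)", which is why the tool starts printing identify data only after the trace below completes. Link against SPDK's NVMe and env libraries; the exact build glue depends on how SPDK was installed.
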
00:22:58.361 [2024-10-17 19:30:21.979507] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178932 ] 00:22:58.361 [2024-10-17 19:30:22.018771] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:58.361 [2024-10-17 19:30:22.018813] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:58.361 [2024-10-17 19:30:22.018818] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:58.361 [2024-10-17 19:30:22.018828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:58.361 [2024-10-17 19:30:22.018836] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:58.361 [2024-10-17 19:30:22.022786] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:58.361 [2024-10-17 19:30:22.022815] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd11760 0 00:22:58.361 [2024-10-17 19:30:22.029611] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:58.361 [2024-10-17 19:30:22.029624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:58.361 [2024-10-17 19:30:22.029628] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:58.361 [2024-10-17 19:30:22.029631] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:58.361 [2024-10-17 19:30:22.029656] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.029661] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.029664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.361 [2024-10-17 19:30:22.029673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:58.361 [2024-10-17 19:30:22.029689] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.361 [2024-10-17 19:30:22.036608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.361 [2024-10-17 19:30:22.036616] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.361 [2024-10-17 19:30:22.036619] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.361 [2024-10-17 19:30:22.036632] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:58.361 [2024-10-17 19:30:22.036638] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:58.361 [2024-10-17 19:30:22.036642] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:58.361 [2024-10-17 19:30:22.036652] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036658] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036662] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.361 [2024-10-17 19:30:22.036668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.361 [2024-10-17 19:30:22.036681] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.361 [2024-10-17 19:30:22.036835] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.361 [2024-10-17 19:30:22.036841] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.361 [2024-10-17 19:30:22.036844] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036847] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.361 [2024-10-17 19:30:22.036852] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:58.361 [2024-10-17 19:30:22.036858] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:58.361 [2024-10-17 19:30:22.036864] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036867] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036870] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.361 [2024-10-17 19:30:22.036876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.361 [2024-10-17 19:30:22.036886] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.361 [2024-10-17 19:30:22.036950] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.361 [2024-10-17 19:30:22.036955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.361 [2024-10-17 19:30:22.036958] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036962] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.361 [2024-10-17 19:30:22.036966] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:58.361 [2024-10-17 19:30:22.036972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:58.361 [2024-10-17 19:30:22.036978] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036981] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.036984] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.361 [2024-10-17 19:30:22.036990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.361 [2024-10-17 19:30:22.036999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.361 [2024-10-17 19:30:22.037066] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.361 [2024-10-17 19:30:22.037072] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.361 [2024-10-17 19:30:22.037075] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037078] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.361 [2024-10-17 19:30:22.037083] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:58.361 [2024-10-17 19:30:22.037091] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037094] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037097] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.361 [2024-10-17 19:30:22.037103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.361 [2024-10-17 19:30:22.037114] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.361 [2024-10-17 19:30:22.037184] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.361 [2024-10-17 19:30:22.037190] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.361 [2024-10-17 19:30:22.037193] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037196] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.361 [2024-10-17 19:30:22.037200] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:58.361 [2024-10-17 19:30:22.037204] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:58.361 [2024-10-17 19:30:22.037210] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:58.361 [2024-10-17 19:30:22.037315] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:58.361 [2024-10-17 19:30:22.037318] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:58.361 [2024-10-17 19:30:22.037325] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037328] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037331] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.361 [2024-10-17 19:30:22.037336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.361 [2024-10-17 19:30:22.037346] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.361 [2024-10-17 19:30:22.037406] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.361 [2024-10-17 19:30:22.037412] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.361 [2024-10-17 19:30:22.037415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037418] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.361 [2024-10-17 19:30:22.037422] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:58.361 [2024-10-17 19:30:22.037430] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037434] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.361 [2024-10-17 19:30:22.037437] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.362 [2024-10-17 19:30:22.037452] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.362 [2024-10-17 19:30:22.037525] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.362 [2024-10-17 19:30:22.037530] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.362 [2024-10-17 19:30:22.037533] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037536] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.362 [2024-10-17 19:30:22.037540] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:58.362 [2024-10-17 19:30:22.037544] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.037550] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:58.362 [2024-10-17 19:30:22.037560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.037569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037572] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.362 [2024-10-17 19:30:22.037588] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.362 [2024-10-17 19:30:22.037694] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.362 [2024-10-17 19:30:22.037700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.362 [2024-10-17 19:30:22.037703] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037706] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=4096, cccid=0 00:22:58.362 [2024-10-17 19:30:22.037710] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd731c0) on tqpair(0xd11760): expected_datao=0, payload_size=4096 00:22:58.362 [2024-10-17 19:30:22.037714] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037720] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037723] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 
19:30:22.037735] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.362 [2024-10-17 19:30:22.037741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.362 [2024-10-17 19:30:22.037744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037747] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.362 [2024-10-17 19:30:22.037753] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:58.362 [2024-10-17 19:30:22.037757] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:58.362 [2024-10-17 19:30:22.037760] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:58.362 [2024-10-17 19:30:22.037764] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:58.362 [2024-10-17 19:30:22.037768] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:58.362 [2024-10-17 19:30:22.037772] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.037779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.037785] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037788] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037791] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037797] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:58.362 [2024-10-17 19:30:22.037808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.362 [2024-10-17 19:30:22.037867] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.362 [2024-10-17 19:30:22.037872] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.362 [2024-10-17 19:30:22.037875] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037879] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.362 [2024-10-17 19:30:22.037884] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037887] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037894] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.362 [2024-10-17 19:30:22.037904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037908] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037911] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd11760) 00:22:58.362 
[2024-10-17 19:30:22.037916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.362 [2024-10-17 19:30:22.037920] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037924] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037927] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.362 [2024-10-17 19:30:22.037936] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037940] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.362 [2024-10-17 19:30:22.037951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.037961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.037967] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.037970] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.037976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.362 [2024-10-17 19:30:22.037986] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd731c0, cid 0, qid 0 00:22:58.362 [2024-10-17 19:30:22.037991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73340, cid 1, qid 0 00:22:58.362 [2024-10-17 19:30:22.037995] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd734c0, cid 2, qid 0 00:22:58.362 [2024-10-17 19:30:22.037999] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.362 [2024-10-17 19:30:22.038003] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.362 [2024-10-17 19:30:22.038100] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.362 [2024-10-17 19:30:22.038106] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.362 [2024-10-17 19:30:22.038109] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.038112] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.362 [2024-10-17 19:30:22.038116] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:58.362 [2024-10-17 19:30:22.038121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.038129] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.038137] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:58.362 [2024-10-17 19:30:22.038143] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.038147] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.362 [2024-10-17 19:30:22.038150] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.362 [2024-10-17 19:30:22.038155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:58.363 [2024-10-17 19:30:22.038164] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.363 [2024-10-17 19:30:22.038231] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.038236] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.038239] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038242] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.038293] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038303] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038310] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038313] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.038319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.363 [2024-10-17 19:30:22.038329] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.363 [2024-10-17 19:30:22.038405] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.363 [2024-10-17 19:30:22.038411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.363 [2024-10-17 19:30:22.038414] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038417] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=4096, cccid=4 00:22:58.363 [2024-10-17 19:30:22.038421] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd737c0) on tqpair(0xd11760): expected_datao=0, payload_size=4096 00:22:58.363 [2024-10-17 19:30:22.038425] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038430] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038433] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038445] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.038450] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.038453] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038457] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.038465] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:58.363 [2024-10-17 19:30:22.038473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038481] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038487] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.038495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.363 [2024-10-17 19:30:22.038505] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.363 [2024-10-17 19:30:22.038587] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.363 [2024-10-17 19:30:22.038592] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.363 [2024-10-17 19:30:22.038595] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038598] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=4096, cccid=4 00:22:58.363 [2024-10-17 19:30:22.038606] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd737c0) on tqpair(0xd11760): expected_datao=0, payload_size=4096 00:22:58.363 [2024-10-17 19:30:22.038610] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038621] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038625] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038651] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.038656] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.038659] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038663] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.038674] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038682] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038689] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038692] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.038697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.363 [2024-10-17 19:30:22.038708] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.363 [2024-10-17 19:30:22.038775] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.363 [2024-10-17 19:30:22.038780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.363 [2024-10-17 19:30:22.038783] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038786] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=4096, cccid=4 00:22:58.363 [2024-10-17 19:30:22.038790] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd737c0) on tqpair(0xd11760): expected_datao=0, payload_size=4096 00:22:58.363 [2024-10-17 19:30:22.038794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038804] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038808] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038837] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.038843] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.038846] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038849] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.038856] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038869] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038875] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038890] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:58.363 [2024-10-17 19:30:22.038894] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:58.363 [2024-10-17 19:30:22.038898] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:58.363 [2024-10-17 19:30:22.038911] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038914] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.038920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.363 [2024-10-17 19:30:22.038925] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038928] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.038932] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.038937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:58.363 [2024-10-17 19:30:22.038948] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.363 [2024-10-17 19:30:22.038953] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73940, cid 5, qid 0 00:22:58.363 [2024-10-17 19:30:22.039033] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.039039] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.039042] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.039045] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.039051] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.039056] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.039059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.039062] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73940) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.039069] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.039073] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.039078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.363 [2024-10-17 19:30:22.039088] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73940, cid 5, qid 0 00:22:58.363 [2024-10-17 19:30:22.039159] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.039165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.363 [2024-10-17 19:30:22.039168] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.039171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73940) on tqpair=0xd11760 00:22:58.363 [2024-10-17 19:30:22.039178] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.363 [2024-10-17 19:30:22.039182] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd11760) 00:22:58.363 [2024-10-17 19:30:22.039187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.363 [2024-10-17 19:30:22.039197] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73940, cid 5, qid 0 00:22:58.363 [2024-10-17 19:30:22.039258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.363 [2024-10-17 19:30:22.039264] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:58.364 [2024-10-17 19:30:22.039267] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039270] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73940) on tqpair=0xd11760 00:22:58.364 [2024-10-17 19:30:22.039278] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039281] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd11760) 00:22:58.364 [2024-10-17 19:30:22.039287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.364 [2024-10-17 19:30:22.039296] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73940, cid 5, qid 0 00:22:58.364 [2024-10-17 19:30:22.039357] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.364 [2024-10-17 19:30:22.039363] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.364 [2024-10-17 19:30:22.039366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039369] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73940) on tqpair=0xd11760 00:22:58.364 [2024-10-17 19:30:22.039381] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039385] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd11760) 00:22:58.364 [2024-10-17 19:30:22.039390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.364 [2024-10-17 19:30:22.039396] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039400] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd11760) 00:22:58.364 [2024-10-17 19:30:22.039405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.364 [2024-10-17 19:30:22.039411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039414] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd11760) 00:22:58.364 [2024-10-17 19:30:22.039419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.364 [2024-10-17 19:30:22.039426] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039430] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd11760) 00:22:58.364 [2024-10-17 19:30:22.039435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.364 [2024-10-17 19:30:22.039445] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73940, cid 5, qid 0 00:22:58.364 [2024-10-17 19:30:22.039450] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd737c0, cid 4, qid 0 00:22:58.364 [2024-10-17 19:30:22.039454] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73ac0, cid 6, qid 0 00:22:58.364 [2024-10-17 
19:30:22.039458] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73c40, cid 7, qid 0 00:22:58.364 [2024-10-17 19:30:22.039604] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.364 [2024-10-17 19:30:22.039611] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.364 [2024-10-17 19:30:22.039614] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039617] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=8192, cccid=5 00:22:58.364 [2024-10-17 19:30:22.039621] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd73940) on tqpair(0xd11760): expected_datao=0, payload_size=8192 00:22:58.364 [2024-10-17 19:30:22.039626] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039637] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039641] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039649] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.364 [2024-10-17 19:30:22.039653] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.364 [2024-10-17 19:30:22.039656] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039659] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=512, cccid=4 00:22:58.364 [2024-10-17 19:30:22.039663] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd737c0) on tqpair(0xd11760): expected_datao=0, payload_size=512 00:22:58.364 [2024-10-17 19:30:22.039667] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039672] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039675] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.364 [2024-10-17 19:30:22.039684] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.364 [2024-10-17 19:30:22.039687] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039690] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=512, cccid=6 00:22:58.364 [2024-10-17 19:30:22.039694] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd73ac0) on tqpair(0xd11760): expected_datao=0, payload_size=512 00:22:58.364 [2024-10-17 19:30:22.039698] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039703] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039706] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:58.364 [2024-10-17 19:30:22.039715] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:58.364 [2024-10-17 19:30:22.039718] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd11760): datao=0, datal=4096, cccid=7 00:22:58.364 [2024-10-17 19:30:22.039725] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd73c40) on tqpair(0xd11760): expected_datao=0, payload_size=4096 00:22:58.364 [2024-10-17 19:30:22.039728] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039734] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039737] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039744] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.364 [2024-10-17 19:30:22.039749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.364 [2024-10-17 19:30:22.039752] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73940) on tqpair=0xd11760 00:22:58.364 [2024-10-17 19:30:22.039766] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.364 [2024-10-17 19:30:22.039771] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.364 [2024-10-17 19:30:22.039774] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039777] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd737c0) on tqpair=0xd11760 00:22:58.364 [2024-10-17 19:30:22.039785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.364 [2024-10-17 19:30:22.039790] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.364 [2024-10-17 19:30:22.039793] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039797] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73ac0) on tqpair=0xd11760 00:22:58.364 [2024-10-17 19:30:22.039803] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.364 [2024-10-17 19:30:22.039808] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.364 [2024-10-17 19:30:22.039811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.364 [2024-10-17 19:30:22.039814] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73c40) on tqpair=0xd11760 00:22:58.364 ===================================================== 00:22:58.364 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:58.364 ===================================================== 00:22:58.364 Controller Capabilities/Features 00:22:58.364 ================================ 00:22:58.364 Vendor ID: 8086 00:22:58.364 Subsystem Vendor ID: 8086 00:22:58.364 Serial Number: SPDK00000000000001 00:22:58.364 Model Number: SPDK bdev Controller 00:22:58.364 Firmware Version: 25.01 00:22:58.364 Recommended Arb Burst: 6 00:22:58.364 IEEE OUI Identifier: e4 d2 5c 00:22:58.364 Multi-path I/O 00:22:58.364 May have multiple subsystem ports: Yes 00:22:58.364 May have multiple controllers: Yes 00:22:58.364 Associated with SR-IOV VF: No 00:22:58.364 Max Data Transfer Size: 131072 00:22:58.364 Max Number of Namespaces: 32 00:22:58.364 Max Number of I/O Queues: 127 00:22:58.364 NVMe Specification Version (VS): 1.3 00:22:58.364 NVMe Specification Version (Identify): 1.3 00:22:58.364 Maximum Queue Entries: 128 00:22:58.364 Contiguous Queues Required: Yes 00:22:58.364 Arbitration Mechanisms Supported 00:22:58.364 Weighted Round Robin: Not Supported 00:22:58.364 Vendor Specific: Not Supported 00:22:58.364 Reset Timeout: 15000 ms 00:22:58.364 
Doorbell Stride: 4 bytes 00:22:58.364 NVM Subsystem Reset: Not Supported 00:22:58.364 Command Sets Supported 00:22:58.364 NVM Command Set: Supported 00:22:58.364 Boot Partition: Not Supported 00:22:58.364 Memory Page Size Minimum: 4096 bytes 00:22:58.364 Memory Page Size Maximum: 4096 bytes 00:22:58.364 Persistent Memory Region: Not Supported 00:22:58.364 Optional Asynchronous Events Supported 00:22:58.364 Namespace Attribute Notices: Supported 00:22:58.364 Firmware Activation Notices: Not Supported 00:22:58.364 ANA Change Notices: Not Supported 00:22:58.364 PLE Aggregate Log Change Notices: Not Supported 00:22:58.365 LBA Status Info Alert Notices: Not Supported 00:22:58.365 EGE Aggregate Log Change Notices: Not Supported 00:22:58.365 Normal NVM Subsystem Shutdown event: Not Supported 00:22:58.365 Zone Descriptor Change Notices: Not Supported 00:22:58.365 Discovery Log Change Notices: Not Supported 00:22:58.365 Controller Attributes 00:22:58.365 128-bit Host Identifier: Supported 00:22:58.365 Non-Operational Permissive Mode: Not Supported 00:22:58.365 NVM Sets: Not Supported 00:22:58.365 Read Recovery Levels: Not Supported 00:22:58.365 Endurance Groups: Not Supported 00:22:58.365 Predictable Latency Mode: Not Supported 00:22:58.365 Traffic Based Keep Alive: Not Supported 00:22:58.365 Namespace Granularity: Not Supported 00:22:58.365 SQ Associations: Not Supported 00:22:58.365 UUID List: Not Supported 00:22:58.365 Multi-Domain Subsystem: Not Supported 00:22:58.365 Fixed Capacity Management: Not Supported 00:22:58.365 Variable Capacity Management: Not Supported 00:22:58.365 Delete Endurance Group: Not Supported 00:22:58.365 Delete NVM Set: Not Supported 00:22:58.365 Extended LBA Formats Supported: Not Supported 00:22:58.365 Flexible Data Placement Supported: Not Supported 00:22:58.365 00:22:58.365 Controller Memory Buffer Support 00:22:58.365 ================================ 00:22:58.365 Supported: No 00:22:58.365 00:22:58.365 Persistent Memory Region Support 00:22:58.365 ================================ 00:22:58.365 Supported: No 00:22:58.365 00:22:58.365 Admin Command Set Attributes 00:22:58.365 ============================ 00:22:58.365 Security Send/Receive: Not Supported 00:22:58.365 Format NVM: Not Supported 00:22:58.365 Firmware Activate/Download: Not Supported 00:22:58.365 Namespace Management: Not Supported 00:22:58.365 Device Self-Test: Not Supported 00:22:58.365 Directives: Not Supported 00:22:58.365 NVMe-MI: Not Supported 00:22:58.365 Virtualization Management: Not Supported 00:22:58.365 Doorbell Buffer Config: Not Supported 00:22:58.365 Get LBA Status Capability: Not Supported 00:22:58.365 Command & Feature Lockdown Capability: Not Supported 00:22:58.365 Abort Command Limit: 4 00:22:58.365 Async Event Request Limit: 4 00:22:58.365 Number of Firmware Slots: N/A 00:22:58.365 Firmware Slot 1 Read-Only: N/A 00:22:58.365 Firmware Activation Without Reset: N/A 00:22:58.365 Multiple Update Detection Support: N/A 00:22:58.365 Firmware Update Granularity: No Information Provided 00:22:58.365 Per-Namespace SMART Log: No 00:22:58.365 Asymmetric Namespace Access Log Page: Not Supported 00:22:58.365 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:58.365 Command Effects Log Page: Supported 00:22:58.365 Get Log Page Extended Data: Supported 00:22:58.365 Telemetry Log Pages: Not Supported 00:22:58.365 Persistent Event Log Pages: Not Supported 00:22:58.365 Supported Log Pages Log Page: May Support 00:22:58.365 Commands Supported & Effects Log Page: Not Supported 00:22:58.365 Feature Identifiers & 
Effects Log Page: May Support 00:22:58.365 NVMe-MI Commands & Effects Log Page: May Support 00:22:58.365 Data Area 4 for Telemetry Log: Not Supported 00:22:58.365 Error Log Page Entries Supported: 128 00:22:58.365 Keep Alive: Supported 00:22:58.365 Keep Alive Granularity: 10000 ms 00:22:58.365 00:22:58.365 NVM Command Set Attributes 00:22:58.365 ========================== 00:22:58.365 Submission Queue Entry Size 00:22:58.365 Max: 64 00:22:58.365 Min: 64 00:22:58.365 Completion Queue Entry Size 00:22:58.365 Max: 16 00:22:58.365 Min: 16 00:22:58.365 Number of Namespaces: 32 00:22:58.365 Compare Command: Supported 00:22:58.365 Write Uncorrectable Command: Not Supported 00:22:58.365 Dataset Management Command: Supported 00:22:58.365 Write Zeroes Command: Supported 00:22:58.365 Set Features Save Field: Not Supported 00:22:58.365 Reservations: Supported 00:22:58.365 Timestamp: Not Supported 00:22:58.365 Copy: Supported 00:22:58.365 Volatile Write Cache: Present 00:22:58.365 Atomic Write Unit (Normal): 1 00:22:58.365 Atomic Write Unit (PFail): 1 00:22:58.365 Atomic Compare & Write Unit: 1 00:22:58.365 Fused Compare & Write: Supported 00:22:58.365 Scatter-Gather List 00:22:58.365 SGL Command Set: Supported 00:22:58.365 SGL Keyed: Supported 00:22:58.365 SGL Bit Bucket Descriptor: Not Supported 00:22:58.365 SGL Metadata Pointer: Not Supported 00:22:58.365 Oversized SGL: Not Supported 00:22:58.365 SGL Metadata Address: Not Supported 00:22:58.365 SGL Offset: Supported 00:22:58.365 Transport SGL Data Block: Not Supported 00:22:58.365 Replay Protected Memory Block: Not Supported 00:22:58.365 00:22:58.365 Firmware Slot Information 00:22:58.365 ========================= 00:22:58.365 Active slot: 1 00:22:58.365 Slot 1 Firmware Revision: 25.01 00:22:58.365 00:22:58.365 00:22:58.365 Commands Supported and Effects 00:22:58.365 ============================== 00:22:58.365 Admin Commands 00:22:58.365 -------------- 00:22:58.365 Get Log Page (02h): Supported 00:22:58.365 Identify (06h): Supported 00:22:58.365 Abort (08h): Supported 00:22:58.365 Set Features (09h): Supported 00:22:58.365 Get Features (0Ah): Supported 00:22:58.365 Asynchronous Event Request (0Ch): Supported 00:22:58.365 Keep Alive (18h): Supported 00:22:58.365 I/O Commands 00:22:58.365 ------------ 00:22:58.365 Flush (00h): Supported LBA-Change 00:22:58.365 Write (01h): Supported LBA-Change 00:22:58.365 Read (02h): Supported 00:22:58.365 Compare (05h): Supported 00:22:58.365 Write Zeroes (08h): Supported LBA-Change 00:22:58.365 Dataset Management (09h): Supported LBA-Change 00:22:58.365 Copy (19h): Supported LBA-Change 00:22:58.365 00:22:58.365 Error Log 00:22:58.365 ========= 00:22:58.365 00:22:58.365 Arbitration 00:22:58.365 =========== 00:22:58.365 Arbitration Burst: 1 00:22:58.365 00:22:58.365 Power Management 00:22:58.365 ================ 00:22:58.365 Number of Power States: 1 00:22:58.365 Current Power State: Power State #0 00:22:58.365 Power State #0: 00:22:58.365 Max Power: 0.00 W 00:22:58.365 Non-Operational State: Operational 00:22:58.365 Entry Latency: Not Reported 00:22:58.365 Exit Latency: Not Reported 00:22:58.365 Relative Read Throughput: 0 00:22:58.365 Relative Read Latency: 0 00:22:58.365 Relative Write Throughput: 0 00:22:58.365 Relative Write Latency: 0 00:22:58.365 Idle Power: Not Reported 00:22:58.365 Active Power: Not Reported 00:22:58.365 Non-Operational Permissive Mode: Not Supported 00:22:58.365 00:22:58.365 Health Information 00:22:58.365 ================== 00:22:58.365 Critical Warnings: 00:22:58.365 Available Spare Space: 
OK 00:22:58.365 Temperature: OK 00:22:58.365 Device Reliability: OK 00:22:58.365 Read Only: No 00:22:58.365 Volatile Memory Backup: OK 00:22:58.365 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:58.365 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:58.365 Available Spare: 0% 00:22:58.365 Available Spare Threshold: 0% 00:22:58.365 Life Percentage Used:[2024-10-17 19:30:22.039893] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.365 [2024-10-17 19:30:22.039898] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd11760) 00:22:58.365 [2024-10-17 19:30:22.039904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.365 [2024-10-17 19:30:22.039915] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73c40, cid 7, qid 0 00:22:58.365 [2024-10-17 19:30:22.043609] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.365 [2024-10-17 19:30:22.043617] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.365 [2024-10-17 19:30:22.043620] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.365 [2024-10-17 19:30:22.043623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73c40) on tqpair=0xd11760 00:22:58.365 [2024-10-17 19:30:22.043653] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:58.365 [2024-10-17 19:30:22.043662] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd731c0) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.043668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.366 [2024-10-17 19:30:22.043673] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73340) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.043677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.366 [2024-10-17 19:30:22.043681] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd734c0) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.043685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.366 [2024-10-17 19:30:22.043697] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.043701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:58.366 [2024-10-17 19:30:22.043709] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.043712] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.043715] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.043721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.043735] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.043906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.043912] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.043914] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.043918] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.043923] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.043927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.043930] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.043936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.043950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044053] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044059] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044062] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044069] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:58.366 [2024-10-17 19:30:22.044073] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:58.366 [2024-10-17 19:30:22.044081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044087] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044102] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044203] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044209] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044212] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044223] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044227] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044230] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044244] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044321] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044327] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044329] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044333] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044341] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044345] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044348] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044456] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044465] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044468] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044476] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044479] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044482] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044623] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044626] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044635] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044658] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044759] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044765] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044768] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044771] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044779] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044783] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044786] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.044859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.044865] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.044868] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044871] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.044879] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044883] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.044886] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.044891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.044901] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.045010] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.045016] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.045019] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.045022] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.366 [2024-10-17 19:30:22.045030] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.045033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.366 [2024-10-17 19:30:22.045037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.366 [2024-10-17 19:30:22.045042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.366 [2024-10-17 19:30:22.045053] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.366 [2024-10-17 19:30:22.045161] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.366 [2024-10-17 19:30:22.045167] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.366 [2024-10-17 19:30:22.045170] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045173] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 
[2024-10-17 19:30:22.045181] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045184] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045187] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045202] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045311] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045317] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045320] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045323] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045331] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045335] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045338] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045352] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045414] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045420] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045423] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045426] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045434] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045437] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045514] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045520] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045523] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045526] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045538] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 
19:30:22.045540] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045555] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045616] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045625] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045628] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045636] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045640] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045643] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045659] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045767] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045773] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045776] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045779] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045790] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045793] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045865] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045871] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045874] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045877] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045886] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045889] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045892] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.045907] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.045967] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.045973] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.045976] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045979] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.045987] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045990] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.045993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.045999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.046008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.046118] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.046125] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.046128] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.046132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.046140] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.046143] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.046146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.046152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.046161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.367 [2024-10-17 19:30:22.046270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.367 [2024-10-17 19:30:22.046276] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.367 [2024-10-17 19:30:22.046279] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.046282] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.367 [2024-10-17 19:30:22.046290] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.046294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.367 [2024-10-17 19:30:22.046297] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.367 [2024-10-17 19:30:22.046303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.367 [2024-10-17 19:30:22.046311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 
19:30:22.046371] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.046376] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.046379] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046382] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.046390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046394] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046397] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.046402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.046412] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.046521] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.046527] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.046530] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046533] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.046541] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046544] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046547] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.046553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.046562] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.046672] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.046679] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.046683] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046686] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.046695] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046701] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.046706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.046716] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.046774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.046780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 
19:30:22.046783] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046786] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.046794] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046798] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046801] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.046806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.046815] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.046881] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.046887] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.046890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046893] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.046901] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.046908] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.046913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.046922] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.047027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.047032] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.047035] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.047038] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.047046] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.047050] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.047053] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.047059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.047068] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.050606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.050615] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.050618] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.050623] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 
00:22:58.368 [2024-10-17 19:30:22.050633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.050636] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.050639] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd11760) 00:22:58.368 [2024-10-17 19:30:22.050645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:58.368 [2024-10-17 19:30:22.050656] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd73640, cid 3, qid 0 00:22:58.368 [2024-10-17 19:30:22.050807] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:58.368 [2024-10-17 19:30:22.050813] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:58.368 [2024-10-17 19:30:22.050816] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:58.368 [2024-10-17 19:30:22.050819] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd73640) on tqpair=0xd11760 00:22:58.368 [2024-10-17 19:30:22.050825] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:58.368 0% 00:22:58.368 Data Units Read: 0 00:22:58.368 Data Units Written: 0 00:22:58.368 Host Read Commands: 0 00:22:58.368 Host Write Commands: 0 00:22:58.368 Controller Busy Time: 0 minutes 00:22:58.368 Power Cycles: 0 00:22:58.368 Power On Hours: 0 hours 00:22:58.368 Unsafe Shutdowns: 0 00:22:58.368 Unrecoverable Media Errors: 0 00:22:58.368 Lifetime Error Log Entries: 0 00:22:58.368 Warning Temperature Time: 0 minutes 00:22:58.368 Critical Temperature Time: 0 minutes 00:22:58.368 00:22:58.368 Number of Queues 00:22:58.368 ================ 00:22:58.368 Number of I/O Submission Queues: 127 00:22:58.368 Number of I/O Completion Queues: 127 00:22:58.368 00:22:58.368 Active Namespaces 00:22:58.368 ================= 00:22:58.368 Namespace ID:1 00:22:58.368 Error Recovery Timeout: Unlimited 00:22:58.368 Command Set Identifier: NVM (00h) 00:22:58.368 Deallocate: Supported 00:22:58.368 Deallocated/Unwritten Error: Not Supported 00:22:58.368 Deallocated Read Value: Unknown 00:22:58.368 Deallocate in Write Zeroes: Not Supported 00:22:58.368 Deallocated Guard Field: 0xFFFF 00:22:58.368 Flush: Supported 00:22:58.368 Reservation: Supported 00:22:58.368 Namespace Sharing Capabilities: Multiple Controllers 00:22:58.368 Size (in LBAs): 131072 (0GiB) 00:22:58.368 Capacity (in LBAs): 131072 (0GiB) 00:22:58.368 Utilization (in LBAs): 131072 (0GiB) 00:22:58.368 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:58.368 EUI64: ABCDEF0123456789 00:22:58.368 UUID: a1a1af50-ac53-4e5e-a19a-5b811c2f8384 00:22:58.368 Thin Provisioning: Not Supported 00:22:58.368 Per-NS Atomic Units: Yes 00:22:58.368 Atomic Boundary Size (Normal): 0 00:22:58.368 Atomic Boundary Size (PFail): 0 00:22:58.368 Atomic Boundary Offset: 0 00:22:58.368 Maximum Single Source Range Length: 65535 00:22:58.368 Maximum Copy Length: 65535 00:22:58.368 Maximum Source Range Count: 1 00:22:58.368 NGUID/EUI64 Never Reused: No 00:22:58.368 Namespace Write Protected: No 00:22:58.368 Number of LBA Formats: 1 00:22:58.368 Current LBA Format: LBA Format #00 00:22:58.368 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:58.368 00:22:58.368 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:58.368 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:58.368 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.368 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:58.368 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.368 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:58.369 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:58.369 rmmod nvme_tcp 00:22:58.369 rmmod nvme_fabrics 00:22:58.369 rmmod nvme_keyring 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 2178679 ']' 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 2178679 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2178679 ']' 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2178679 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2178679 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2178679' 00:22:58.628 killing process with pid 2178679 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2178679 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2178679 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:58.628 19:30:22 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.628 19:30:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.164 00:23:01.164 real 0m9.914s 00:23:01.164 user 0m7.973s 00:23:01.164 sys 0m4.871s 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:01.164 ************************************ 00:23:01.164 END TEST nvmf_identify 00:23:01.164 ************************************ 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.164 ************************************ 00:23:01.164 START TEST nvmf_perf 00:23:01.164 ************************************ 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:01.164 * Looking for test storage... 
00:23:01.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:01.164 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.165 --rc genhtml_branch_coverage=1 00:23:01.165 --rc genhtml_function_coverage=1 00:23:01.165 --rc genhtml_legend=1 00:23:01.165 --rc geninfo_all_blocks=1 00:23:01.165 --rc geninfo_unexecuted_blocks=1 00:23:01.165 00:23:01.165 ' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.165 --rc genhtml_branch_coverage=1 00:23:01.165 --rc genhtml_function_coverage=1 00:23:01.165 --rc genhtml_legend=1 00:23:01.165 --rc geninfo_all_blocks=1 00:23:01.165 --rc geninfo_unexecuted_blocks=1 00:23:01.165 00:23:01.165 ' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.165 --rc genhtml_branch_coverage=1 00:23:01.165 --rc genhtml_function_coverage=1 00:23:01.165 --rc genhtml_legend=1 00:23:01.165 --rc geninfo_all_blocks=1 00:23:01.165 --rc geninfo_unexecuted_blocks=1 00:23:01.165 00:23:01.165 ' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:01.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.165 --rc genhtml_branch_coverage=1 00:23:01.165 --rc genhtml_function_coverage=1 00:23:01.165 --rc genhtml_legend=1 00:23:01.165 --rc geninfo_all_blocks=1 00:23:01.165 --rc geninfo_unexecuted_blocks=1 00:23:01.165 00:23:01.165 ' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.165 19:30:24 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.165 19:30:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:07.734 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:07.734 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:07.734 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:07.734 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:07.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:07.735 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:07.735 Found net devices under 0000:86:00.0: cvl_0_0 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:07.735 19:30:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:07.735 Found net devices under 0000:86:00.1: cvl_0_1 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.735 19:30:30 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:07.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:23:07.735 00:23:07.735 --- 10.0.0.2 ping statistics --- 00:23:07.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.735 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:23:07.735 00:23:07.735 --- 10.0.0.1 ping statistics --- 00:23:07.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.735 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=2182452 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 2182452 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2182452 ']' 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.735 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:07.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:07.736 [2024-10-17 19:30:30.721053] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:23:07.736 [2024-10-17 19:30:30.721095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.736 [2024-10-17 19:30:30.800624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.736 [2024-10-17 19:30:30.842802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.736 [2024-10-17 19:30:30.842841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.736 [2024-10-17 19:30:30.842848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.736 [2024-10-17 19:30:30.842855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.736 [2024-10-17 19:30:30.842860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.736 [2024-10-17 19:30:30.844277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.736 [2024-10-17 19:30:30.844387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.736 [2024-10-17 19:30:30.844496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.736 [2024-10-17 19:30:30.844497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:07.736 19:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:10.272 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:10.272 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:10.531 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:10.531 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:10.790 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
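From this point the perf suite drives the target entirely over JSON-RPC. The bdev discovery just traced, together with the subsystem wiring that follows immediately below, condenses to the sequence sketched here (RPC names and arguments exactly as traced; only the surrounding shell is paraphrased). The four reactors above are a direct consequence of the -m 0xF core mask: binary 1111, one reactor each on cores 0-3.

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Pull the PCIe address of the locally attached controller named Nvme0 out of
    # the target's bdev config; on this node it resolves to 0000:5e:00.0.
    local_nvme_trid=$($rpc_py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')
    $rpc_py bdev_malloc_create 64 512        # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    $rpc_py nvmf_create_transport -t tcp -o  # opts come from NVMF_TRANSPORT_OPTS ('-t tcp -o')
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two namespaces added here are why every spdk_nvme_perf table below carries an NSID 1 and an NSID 2 row: one RAM-backed, one backed by the local drive at 0000:5e:00.0.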
00:23:10.790 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:10.790 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:10.790 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:10.790 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:11.049 [2024-10-17 19:30:34.652920] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.049 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.308 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:11.308 19:30:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:11.309 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:11.309 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:11.568 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.832 [2024-10-17 19:30:35.441231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.832 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:12.170 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:12.170 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:12.170 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:12.170 19:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:13.187 Initializing NVMe Controllers 00:23:13.187 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:13.187 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:13.187 Initialization complete. Launching workers. 
00:23:13.187 ========================================================
00:23:13.187 Latency(us)
00:23:13.187 Device Information : IOPS MiB/s Average min max
00:23:13.187 PCIE (0000:5e:00.0) NSID 1 from core 0: 99249.97 387.70 321.90 24.46 5222.83
00:23:13.187 ========================================================
00:23:13.187 Total : 99249.97 387.70 321.90 24.46 5222.83
00:23:13.188
00:23:13.188 19:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:14.564 Initializing NVMe Controllers
00:23:14.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:14.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:14.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:14.564 Initialization complete. Launching workers.
00:23:14.564 ========================================================
00:23:14.564 Latency(us)
00:23:14.564 Device Information : IOPS MiB/s Average min max
00:23:14.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.00 0.29 13546.02 107.93 44681.93
00:23:14.564 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19739.69 6982.92 50878.00
00:23:14.564 ========================================================
00:23:14.564 Total : 126.00 0.49 16052.98 107.93 50878.00
00:23:14.564
00:23:14.564 19:30:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:15.942 Initializing NVMe Controllers
00:23:15.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:15.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:15.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:15.942 Initialization complete. Launching workers.
00:23:15.942 ========================================================
00:23:15.942 Latency(us)
00:23:15.942 Device Information : IOPS MiB/s Average min max
00:23:15.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11272.00 44.03 2841.11 457.77 7700.62
00:23:15.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3826.00 14.95 8410.50 5511.20 16138.79
00:23:15.942 ========================================================
00:23:15.942 Total : 15098.00 58.98 4252.45 457.77 16138.79
00:23:15.942
00:23:15.942 19:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:23:15.942 19:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:23:15.942 19:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:18.476 Initializing NVMe Controllers
00:23:18.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:18.476 Controller IO queue size 128, less than required.
00:23:18.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:18.476 Controller IO queue size 128, less than required.
00:23:18.476 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:18.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:18.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:18.476 Initialization complete. Launching workers.
00:23:18.476 ========================================================
00:23:18.476 Latency(us)
00:23:18.476 Device Information : IOPS MiB/s Average min max
00:23:18.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1806.93 451.73 71935.01 41307.34 130386.13
00:23:18.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.97 151.24 223145.40 79611.65 322340.13
00:23:18.476 ========================================================
00:23:18.476 Total : 2411.91 602.98 109862.78 41307.34 322340.13
00:23:18.476
00:23:18.476 19:30:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:23:18.477 No valid NVMe controllers or AIO or URING devices found
00:23:18.477 Initializing NVMe Controllers
00:23:18.477 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:18.477 Controller IO queue size 128, less than required.
00:23:18.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:18.477 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:23:18.477 Controller IO queue size 128, less than required.
00:23:18.477 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:18.477 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:23:18.477 WARNING: Some requested NVMe devices were skipped
00:23:18.477 19:30:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:23:21.272 Initializing NVMe Controllers
00:23:21.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:21.272 Controller IO queue size 128, less than required.
00:23:21.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.272 Controller IO queue size 128, less than required.
00:23:21.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:21.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:21.272 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:21.272 Initialization complete. Launching workers.
00:23:21.272
00:23:21.272 ====================
00:23:21.272 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:23:21.272 TCP transport:
00:23:21.272 polls: 16701
00:23:21.272 idle_polls: 13216
00:23:21.272 sock_completions: 3485
00:23:21.272 nvme_completions: 5985
00:23:21.272 submitted_requests: 8954
00:23:21.272 queued_requests: 1
00:23:21.272
00:23:21.272 ====================
00:23:21.272 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:23:21.272 TCP transport:
00:23:21.272 polls: 17171
00:23:21.272 idle_polls: 12956
00:23:21.272 sock_completions: 4215
00:23:21.272 nvme_completions: 6681
00:23:21.272 submitted_requests: 10060
00:23:21.272 queued_requests: 1
00:23:21.272 ========================================================
00:23:21.272 Latency(us)
00:23:21.272 Device Information : IOPS MiB/s Average min max
00:23:21.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1494.95 373.74 86835.30 60602.73 138787.24
00:23:21.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1668.82 417.21 77873.88 43001.14 143437.78
00:23:21.272 ========================================================
00:23:21.272 Total : 3163.77 790.94 82108.34 43001.14 143437.78
00:23:21.272
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:21.272 19:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:21.272 rmmod nvme_tcp
00:23:21.272 rmmod nvme_fabrics
00:23:21.272 rmmod nvme_keyring
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 2182452 ']'
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 2182452
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2182452 ']'
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2182452
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2182452
00:23:21.531 19:30:45
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2182452' 00:23:21.531 killing process with pid 2182452 00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2182452 00:23:21.531 19:30:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2182452 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.437 19:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:25.972 00:23:25.972 real 0m24.714s 00:23:25.972 user 1m4.907s 00:23:25.972 sys 0m8.318s 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:25.972 ************************************ 00:23:25.972 END TEST nvmf_perf 00:23:25.972 ************************************ 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.972 ************************************ 00:23:25.972 START TEST nvmf_fio_host 00:23:25.972 ************************************ 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:25.972 * Looking for test storage... 
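One number in the nvmf_perf timing block above is worth decoding: user CPU time (1m4.907s) far exceeds wall-clock time (0m24.714s). With SPDK that is expected rather than alarming, because reactors busy-poll their cores instead of sleeping, so the accounting reflects poll loops as much as useful I/O work. Roughly:

    (64.907 s user + 8.318 s sys) / 24.714 s real ≈ 2.96 cores busy on average

which is consistent with the four reactors started by the -m 0xF mask not all being pinned busy for the entire run.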
00:23:25.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:25.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.972 --rc genhtml_branch_coverage=1 00:23:25.972 --rc genhtml_function_coverage=1 00:23:25.972 --rc genhtml_legend=1 00:23:25.972 --rc geninfo_all_blocks=1 00:23:25.972 --rc geninfo_unexecuted_blocks=1 00:23:25.972 00:23:25.972 ' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:25.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.972 --rc genhtml_branch_coverage=1 00:23:25.972 --rc genhtml_function_coverage=1 00:23:25.972 --rc genhtml_legend=1 00:23:25.972 --rc geninfo_all_blocks=1 00:23:25.972 --rc geninfo_unexecuted_blocks=1 00:23:25.972 00:23:25.972 ' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:25.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.972 --rc genhtml_branch_coverage=1 00:23:25.972 --rc genhtml_function_coverage=1 00:23:25.972 --rc genhtml_legend=1 00:23:25.972 --rc geninfo_all_blocks=1 00:23:25.972 --rc geninfo_unexecuted_blocks=1 00:23:25.972 00:23:25.972 ' 00:23:25.972 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:25.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:25.972 --rc genhtml_branch_coverage=1 00:23:25.972 --rc genhtml_function_coverage=1 00:23:25.972 --rc genhtml_legend=1 00:23:25.972 --rc geninfo_all_blocks=1 00:23:25.972 --rc geninfo_unexecuted_blocks=1 00:23:25.972 00:23:25.973 ' 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.973 19:30:49 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:25.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:25.973 
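The "[: : integer expression expected" message captured above is a script warning, not a test failure: inside build_nvmf_app_args, nvmf/common.sh line 33 runs '[' '' -eq 1 ']' when an optional flag variable is empty, and test(1) rejects an empty string where -eq expects an integer. A minimal reproduction of the pattern, with a defensive rewrite (the variable name is illustrative, not the one in common.sh):

flag=""                      # optional setting this CI job leaves unset
[ "$flag" -eq 1 ]            # -> "[: : integer expression expected", exit status 2
[ "${flag:-0}" -eq 1 ]       # defaulting to 0 keeps the test well-formed

The guard evaluates false either way, so the warning is cosmetic and the run continues normally.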
19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:25.973 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:25.974 19:30:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.544 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:32.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:32.545 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:32.545 Found net devices under 0000:86:00.0: cvl_0_0 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:32.545 Found net devices under 0000:86:00.1: cvl_0_1 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
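Both E810 ports (device ID 0x159b, bound to the ice driver) were just resolved to their kernel net devices by globbing sysfs. Condensed from the xtrace above, the loop behind the "Found net devices under ..." lines is roughly:

for pci in 0000:86:00.0 0000:86:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob expands to e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

Each interface must also report up (the [[ up == up ]] checks) before it is appended to net_devs.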
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:23:32.545 00:23:32.545 --- 10.0.0.2 ping statistics --- 00:23:32.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.545 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:23:32.545 00:23:32.545 --- 10.0.0.1 ping statistics --- 00:23:32.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.545 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2188566 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2188566 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2188566 ']' 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.545 [2024-10-17 19:30:55.544524] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
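nvmf_tcp_init lets one machine act as both initiator and target: the first E810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace while the second (cvl_0_1, 10.0.0.1) stays in the root namespace, and the NVMe/TCP port is opened with a tagged iptables rule. Condensed from the commands above, with the target then launched inside that namespace (paths abbreviated):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'               # the tag is how teardown finds the rule
ping -c 1 10.0.0.2                                     # reachability verified both ways above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # pid 2188566 here

waitforlisten then polls /var/tmp/spdk.sock until the target answers RPCs before provisioning starts.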
00:23:32.545 [2024-10-17 19:30:55.544567] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.545 [2024-10-17 19:30:55.624749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.545 [2024-10-17 19:30:55.666634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.545 [2024-10-17 19:30:55.666670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.545 [2024-10-17 19:30:55.666677] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.545 [2024-10-17 19:30:55.666683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.545 [2024-10-17 19:30:55.666688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.545 [2024-10-17 19:30:55.668247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.545 [2024-10-17 19:30:55.668353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.545 [2024-10-17 19:30:55.668461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.545 [2024-10-17 19:30:55.668462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:32.545 [2024-10-17 19:30:55.932168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.545 19:30:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:32.545 Malloc1 00:23:32.545 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.805 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:33.063 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.063 [2024-10-17 19:30:56.765478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.063 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
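With the target listening for RPCs, host/fio.sh provisions a complete subsystem over JSON-RPC. The sequence recorded above boils down to (rpc.py path abbreviated):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1            # 64 MiB ramdisk, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The two tcp.c notices ("TCP Transport Init" and "Listening on 10.0.0.2 port 4420") confirm each step landed.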
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:33.322 19:30:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:33.322 19:30:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:33.581 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:33.581 fio-3.35 00:23:33.581 Starting 1 thread 00:23:36.114 00:23:36.114 test: (groupid=0, jobs=1): 
err= 0: pid=2189057: Thu Oct 17 19:30:59 2024 00:23:36.114 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec) 00:23:36.114 slat (nsec): min=1530, max=254146, avg=1700.38, stdev=2187.33 00:23:36.114 clat (usec): min=2823, max=10304, avg=5938.28, stdev=495.32 00:23:36.114 lat (usec): min=2860, max=10306, avg=5939.98, stdev=495.23 00:23:36.114 clat percentiles (usec): 00:23:36.114 | 1.00th=[ 4752], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5538], 00:23:36.114 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:23:36.114 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:23:36.114 | 99.00th=[ 7046], 99.50th=[ 7832], 99.90th=[ 8848], 99.95th=[ 9503], 00:23:36.114 | 99.99th=[10290] 00:23:36.114 bw ( KiB/s): min=46568, max=48192, per=99.97%, avg=47510.00, stdev=702.28, samples=4 00:23:36.114 iops : min=11642, max=12048, avg=11877.50, stdev=175.57, samples=4 00:23:36.114 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.6MiB/2005msec); 0 zone resets 00:23:36.114 slat (nsec): min=1558, max=191671, avg=1761.64, stdev=1477.35 00:23:36.114 clat (usec): min=2265, max=9367, avg=4812.14, stdev=419.26 00:23:36.114 lat (usec): min=2280, max=9369, avg=4813.91, stdev=419.29 00:23:36.114 clat percentiles (usec): 00:23:36.114 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:23:36.114 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883], 00:23:36.114 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5276], 95.00th=[ 5407], 00:23:36.114 | 99.00th=[ 5800], 99.50th=[ 6849], 99.90th=[ 7635], 99.95th=[ 8094], 00:23:36.114 | 99.99th=[ 9372] 00:23:36.114 bw ( KiB/s): min=47024, max=47808, per=100.00%, avg=47308.00, stdev=348.50, samples=4 00:23:36.114 iops : min=11756, max=11952, avg=11827.00, stdev=87.12, samples=4 00:23:36.114 lat (msec) : 4=1.03%, 10=98.96%, 20=0.01% 00:23:36.114 cpu : usr=67.32%, sys=31.24%, ctx=97, majf=0, minf=3 00:23:36.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:36.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:36.114 issued rwts: total=23822,23714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:36.114 00:23:36.114 Run status group 0 (all jobs): 00:23:36.114 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:23:36.115 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.6MiB (97.1MB), run=2005-2005msec 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
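fio_plugin wires SPDK's NVMe ioengine into stock fio via LD_PRELOAD; the ldd/grep passes above only check whether a sanitizer runtime (libasan or libclang_rt.asan) must be preloaded as well, and both come back empty in this build. Condensed (paths abbreviated):

plugin=build/fio/spdk_nvme
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # empty: not an ASan build
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The first job's numbers are self-consistent: 11,877 read IOPS at 4096 B per I/O works out to 11877 * 4096 ≈ 48.7 MB/s, exactly the reported read bandwidth.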
local sanitizers 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:36.115 19:30:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:36.373 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:36.373 fio-3.35 00:23:36.373 Starting 1 thread 00:23:38.903 00:23:38.903 test: (groupid=0, jobs=1): err= 0: pid=2189609: Thu Oct 17 19:31:02 2024 00:23:38.903 read: IOPS=11.0k, BW=172MiB/s (181MB/s)(346MiB/2005msec) 00:23:38.903 slat (nsec): min=2473, max=81403, avg=2807.10, stdev=1184.75 00:23:38.903 clat (usec): min=1457, max=12779, avg=6653.70, stdev=1526.35 00:23:38.903 lat (usec): min=1459, max=12781, avg=6656.51, stdev=1526.42 00:23:38.903 clat percentiles (usec): 00:23:38.903 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5276], 00:23:38.903 | 30.00th=[ 5735], 40.00th=[ 6259], 50.00th=[ 6652], 60.00th=[ 7111], 00:23:38.903 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8586], 95.00th=[ 9241], 00:23:38.903 | 99.00th=[10814], 99.50th=[11338], 99.90th=[12256], 99.95th=[12518], 00:23:38.903 | 99.99th=[12649] 00:23:38.903 bw ( KiB/s): min=85600, max=93696, per=51.11%, avg=90272.00, stdev=3660.24, samples=4 00:23:38.903 iops : min= 5350, max= 5856, avg=5642.00, stdev=228.76, samples=4 00:23:38.903 write: IOPS=6455, BW=101MiB/s (106MB/s)(184MiB/1823msec); 0 zone resets 00:23:38.903 
slat (usec): min=29, max=346, avg=31.56, stdev= 6.27 00:23:38.903 clat (usec): min=3337, max=14297, avg=8589.43, stdev=1473.63 00:23:38.903 lat (usec): min=3366, max=14327, avg=8620.99, stdev=1474.46 00:23:38.903 clat percentiles (usec): 00:23:38.903 | 1.00th=[ 5538], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7308], 00:23:38.903 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:38.903 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11207], 00:23:38.903 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13173], 99.95th=[13435], 00:23:38.903 | 99.99th=[13566] 00:23:38.903 bw ( KiB/s): min=88288, max=97376, per=90.69%, avg=93680.00, stdev=4088.37, samples=4 00:23:38.903 iops : min= 5518, max= 6086, avg=5855.00, stdev=255.52, samples=4 00:23:38.903 lat (msec) : 2=0.04%, 4=1.73%, 10=91.01%, 20=7.21% 00:23:38.903 cpu : usr=85.63%, sys=13.47%, ctx=67, majf=0, minf=3 00:23:38.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:38.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:38.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:38.903 issued rwts: total=22133,11769,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:38.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:38.903 00:23:38.903 Run status group 0 (all jobs): 00:23:38.903 READ: bw=172MiB/s (181MB/s), 172MiB/s-172MiB/s (181MB/s-181MB/s), io=346MiB (363MB), run=2005-2005msec 00:23:38.903 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=184MiB (193MB), run=1823-1823msec 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:38.903 rmmod nvme_tcp 00:23:38.903 rmmod nvme_fabrics 00:23:38.903 rmmod nvme_keyring 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 2188566 ']' 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 2188566 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2188566 ']' 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # kill -0 2188566 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:38.903 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2188566 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2188566' 00:23:39.163 killing process with pid 2188566 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2188566 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2188566 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.163 19:31:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.699 19:31:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:41.699 00:23:41.699 real 0m15.664s 00:23:41.699 user 0m45.984s 00:23:41.699 sys 0m6.565s 00:23:41.699 19:31:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.699 19:31:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 ************************************ 00:23:41.699 END TEST nvmf_fio_host 00:23:41.699 ************************************ 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.699 ************************************ 00:23:41.699 START TEST nvmf_failover 00:23:41.699 ************************************ 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # 
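Before failover.sh gets going, note how the fio host test cleaned up after itself above: nvmftestfini unloads the NVMe modules (the rmmod lines), kills the target by its recorded pid, strips only the SPDK-tagged iptables rules, and removes the namespace. Condensed:

kill 2188566 && wait 2188566                           # killprocess, pid recorded at startup
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops only rules tagged SPDK_NVMF
# _remove_spdk_ns deletes cvl_0_0_ns_spdk (assumed from the name; its body is not shown here)
ip -4 addr flush cvl_0_1

The timing summary is also worth a glance: 15.7 s of wall time against 46 s of user time, because the four reactor cores busy-poll for the whole run.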
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:41.699 * Looking for test storage... 00:23:41.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:41.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.699 --rc genhtml_branch_coverage=1 00:23:41.699 --rc genhtml_function_coverage=1 00:23:41.699 --rc genhtml_legend=1 00:23:41.699 --rc geninfo_all_blocks=1 00:23:41.699 --rc geninfo_unexecuted_blocks=1 00:23:41.699 00:23:41.699 ' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:41.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.699 --rc genhtml_branch_coverage=1 00:23:41.699 --rc genhtml_function_coverage=1 00:23:41.699 --rc genhtml_legend=1 00:23:41.699 --rc geninfo_all_blocks=1 00:23:41.699 --rc geninfo_unexecuted_blocks=1 00:23:41.699 00:23:41.699 ' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:41.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.699 --rc genhtml_branch_coverage=1 00:23:41.699 --rc genhtml_function_coverage=1 00:23:41.699 --rc genhtml_legend=1 00:23:41.699 --rc geninfo_all_blocks=1 00:23:41.699 --rc geninfo_unexecuted_blocks=1 00:23:41.699 00:23:41.699 ' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:41.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.699 --rc genhtml_branch_coverage=1 00:23:41.699 --rc genhtml_function_coverage=1 00:23:41.699 --rc genhtml_legend=1 00:23:41.699 --rc geninfo_all_blocks=1 00:23:41.699 --rc geninfo_unexecuted_blocks=1 00:23:41.699 00:23:41.699 ' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
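The lcov probe above exercises scripts/common.sh's generic dotted-version comparator: both version strings are split on '.' and '-' (IFS=.-) and compared field by field, so "lt 1.15 2" succeeds and the pre-2.0 form of LCOV_OPTS is exported. The idiom, reduced to a sketch:

IFS=.- read -ra v1 <<< "1.15"
IFS=.- read -ra v2 <<< "2"
for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do         # first differing field decides
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && { echo older; break; }  # 1 < 2: lcov 1.15 is older
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && { echo newer; break; }
done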
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:41.699 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:41.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
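failover.sh repeats the same environment bootstrap (including the same cosmetic line-33 warning) and, just below, points bdevperf_rpc_sock at /var/tmp/bdevperf.sock: unlike the fio host test, failover drives I/O through SPDK's bdevperf example app, controlled over that second RPC socket so controllers can be re-attached while I/O runs. A plausible sketch of the pattern, not verbatim from this run (queue depth, I/O size, and timings may differ):

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # illustrative names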
00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:41.700 19:31:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.270 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:48.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:48.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci 
in "${pci_devs[@]}" 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:48.271 Found net devices under 0000:86:00.0: cvl_0_0 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:48.271 Found net devices under 0000:86:00.1: cvl_0_1 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.271 19:31:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:48.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:23:48.271 00:23:48.271 --- 10.0.0.2 ping statistics --- 00:23:48.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.271 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:48.271 00:23:48.271 --- 10.0.0.1 ping statistics --- 00:23:48.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.271 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=2193491 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 2193491 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2193491 ']' 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.271 19:31:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:48.271 [2024-10-17 19:31:11.280597] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:23:48.271 [2024-10-17 19:31:11.280656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.271 [2024-10-17 19:31:11.363605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:48.271 [2024-10-17 19:31:11.405308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
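(The nvmf_tcp_init sequence above pins one E810 port inside a private network namespace so the NVMe/TCP target at 10.0.0.2 (namespaced) and the initiator at 10.0.0.1 (root namespace) exchange traffic over the host's physical links. Condensed into a standalone sketch using the interface and namespace names this run chose; run as root, and note the real script installs the iptables rule through a wrapper that tags it with an SPDK_NVMF comment:

    # Isolate the target-side port in its own namespace and address both ends
    # (condensed from the nvmf/common.sh@265-@291 commands above).
    ns=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1

Every nvmf_tgt invocation that follows is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the target listens on the namespaced 10.0.0.2 while bdevperf connects from the root namespace.)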
00:23:48.271 [2024-10-17 19:31:11.405344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.271 [2024-10-17 19:31:11.405351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.271 [2024-10-17 19:31:11.405357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.271 [2024-10-17 19:31:11.405362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.272 [2024-10-17 19:31:11.406787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.272 [2024-10-17 19:31:11.406891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.272 [2024-10-17 19:31:11.406893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.529 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:48.786 [2024-10-17 19:31:12.327980] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.786 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:48.786 Malloc0 00:23:49.044 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:49.044 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:49.302 19:31:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.560 [2024-10-17 19:31:13.130220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.560 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:49.818 [2024-10-17 19:31:13.346825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:49.818 [2024-10-17 19:31:13.547468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2193975 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2193975 /var/tmp/bdevperf.sock 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2193975 ']' 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.818 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:50.076 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.076 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:50.076 19:31:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:50.640 NVMe0n1 00:23:50.640 19:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:50.898 00:23:50.898 19:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2194019 00:23:50.898 19:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:50.899 19:31:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:51.832 19:31:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:52.090 [2024-10-17 19:31:15.695025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81390 is same with the state(6) to be set 00:23:52.090 [2024-10-17 19:31:15.695089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81390 is same with the state(6) to be set 00:23:52.090 [2024-10-17 19:31:15.695098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81390 is same with the state(6) to be set 00:23:52.090 [2024-10-17 
19:31:15.695105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81390 is same with the state(6) to be set 00:23:52.090 (previous message repeated with consecutive timestamps through 19:31:15.695250)
00:23:52.091 19:31:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:23:55.372 19:31:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:55.372 00:23:55.372
19:31:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:55.630 [2024-10-17 19:31:19.215133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf821e0 is same with the state(6) to be set 00:23:55.630 (previous message repeated with consecutive timestamps through 19:31:19.215270)
00:23:55.630 19:31:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:23:58.912 19:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:58.912 [2024-10-17 19:31:22.423656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:58.912 19:31:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:23:59.844 19:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:00.178 [2024-10-17 19:31:23.630543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf83100 is same with the state(6) to be set 00:24:00.178 (previous message repeated with consecutive timestamps through 19:31:23.630635)
00:24:00.178 19:31:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2194019
00:24:06.740 { 00:24:06.740 "results": [ 00:24:06.740 { 00:24:06.740 "job": "NVMe0n1", 00:24:06.740 "core_mask": "0x1", 00:24:06.740 "workload": "verify", 00:24:06.740 "status": "finished", 00:24:06.740 "verify_range": { 00:24:06.740 "start": 0, 00:24:06.740 "length": 16384 00:24:06.740 }, 00:24:06.740 "queue_depth": 128, 00:24:06.740 "io_size": 4096, 00:24:06.740 "runtime": 15.00581, 00:24:06.740 "iops": 11216.921978886845, 00:24:06.740 "mibps": 43.81610148002674, 00:24:06.740 "io_failed": 14397, 00:24:06.740 "io_timeout": 0, 00:24:06.740 "avg_latency_us": 10490.759149385098, 00:24:06.740 "min_latency_us": 425.2038095238095, 00:24:06.740 "max_latency_us": 23218.46857142857 00:24:06.740 } 00:24:06.740 ], 00:24:06.740 "core_count": 1 00:24:06.740 }
00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2193975 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2193975 ']' 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2193975 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
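(The block printed by wait above is bdevperf's perform_tests result, plain JSON on stdout. If that block is saved to a file -- results.json is a hypothetical name -- the headline numbers can be pulled out with jq, assuming jq is available on the host:

    # Summarize a captured bdevperf result (hypothetical results.json holding the JSON above).
    jq -r '.results[] | "\(.job): \(.iops|floor) IOPS, \(.io_failed) failed IOs, avg \(.avg_latency_us) us"' results.json

For this run that yields NVMe0n1: 11216 IOPS with 14397 failed IOs; the failures are presumably the I/Os aborted while the three listeners were being dropped, and the verify job still finishes.)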
00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2193975 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2193975' 00:24:06.740 killing process with pid 2193975 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2193975 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2193975 00:24:06.740 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:06.740 [2024-10-17 19:31:13.624739] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:24:06.740 [2024-10-17 19:31:13.624794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193975 ] 00:24:06.740 [2024-10-17 19:31:13.700671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.740 [2024-10-17 19:31:13.741666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.740 Running I/O for 15 seconds... 00:24:06.740 11560.00 IOPS, 45.16 MiB/s [2024-10-17T17:31:30.524Z] [2024-10-17 19:31:15.696160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-10-17 19:31:15.696195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-10-17 19:31:15.696210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-10-17 19:31:15.696218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-10-17 19:31:15.696226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-10-17 19:31:15.696234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.740 [2024-10-17 19:31:15.696244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.740 [2024-10-17 19:31:15.696251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-10-17 19:31:15.696260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-10-17 19:31:15.696267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 [2024-10-17 19:31:15.696275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:100864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.741 [2024-10-17 19:31:15.696282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.741 (the same nvme_io_qpair_print_command/spdk_nvme_print_completion pair repeats for every remaining in-flight I/O on the deleted submission queue -- WRITEs lba:100872 through lba:101368 and a READ at lba:100664, each completed as ABORTED - SQ DELETION -- after which nvme_qpair_abort_queued_reqs aborts the queued requests, each logged as 'Command completed manually' with PRP1 0x0 PRP2 0x0, lba:101376 through lba:101536) 00:24:06.743 [2024-10-17 19:31:15.697765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101544 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101552 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697811] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101560 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101568 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101576 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101584 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101592 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 
[2024-10-17 19:31:15.697930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101600 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101608 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.697976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.697981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.697987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101616 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.697993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.698000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.698005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.698010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101624 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.698016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.698022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.698027] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.698032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101632 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.698039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.698047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.698053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.698059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101640 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.698065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.743 [2024-10-17 19:31:15.698071] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.743 [2024-10-17 19:31:15.698076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.743 [2024-10-17 19:31:15.698081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101648 len:8 PRP1 0x0 PRP2 0x0 00:24:06.743 [2024-10-17 19:31:15.698087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.698094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.698099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.698105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.698111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.698118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.698122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.698128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100680 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.698134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.698141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.698146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.698152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100688 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.698158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.698165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.698170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:24:06.744 [2024-10-17 19:31:15.709309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100736 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100744 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100752 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709506] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100760 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100768 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100776 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100784 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100792 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100800 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100808 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100816 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709762] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100824 len:8 PRP1 0x0 PRP2 0x0 00:24:06.744 [2024-10-17 19:31:15.709785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.744 [2024-10-17 19:31:15.709794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.744 [2024-10-17 19:31:15.709802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.744 [2024-10-17 19:31:15.709809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100832 len:8 PRP1 0x0 PRP2 0x0 00:24:06.745 [2024-10-17 19:31:15.709817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.745 [2024-10-17 19:31:15.709826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.745 [2024-10-17 19:31:15.709833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.745 [2024-10-17 19:31:15.709842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100840 len:8 PRP1 0x0 PRP2 0x0 00:24:06.745 [2024-10-17 19:31:15.709850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.745 [2024-10-17 19:31:15.709859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.745 [2024-10-17 19:31:15.709866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.745 [2024-10-17 19:31:15.709874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100848 len:8 PRP1 0x0 PRP2 0x0 00:24:06.745 [2024-10-17 19:31:15.709883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.745 [2024-10-17 19:31:15.709928] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbfaac0 was disconnected and freed. reset controller. 
00:24:06.745 [2024-10-17 19:31:15.709940] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:06.745 [2024-10-17 19:31:15.709966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.745 [2024-10-17 19:31:15.709976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.745 [2024-10-17 19:31:15.709988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.745 [2024-10-17 19:31:15.709996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.745 [2024-10-17 19:31:15.710006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.745 [2024-10-17 19:31:15.710015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.745 [2024-10-17 19:31:15.710024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.745 [2024-10-17 19:31:15.710033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.745 [2024-10-17 19:31:15.710042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.745 [2024-10-17 19:31:15.710077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6400 (9): Bad file descriptor
00:24:06.745 [2024-10-17 19:31:15.713819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.745 [2024-10-17 19:31:15.788303] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
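What the burst above records: I/O qpair 0xbfaac0 to 10.0.0.2:4420 was disconnected (the later "Bad file descriptor" flush error suggests the TCP socket was torn down underneath it, which is what this failover test provokes), so every command still queued on that submission queue was completed manually with ABORTED - SQ DELETION (00/08); bdev_nvme then started a failover to the second listener at 10.0.0.2:4421 and reset the controller successfully. When reading a storm like this it helps to collapse the per-command lines into ranges. The sketch below does only that; it is not an SPDK tool, it assumes nothing beyond the line format visible in this transcript, and the script and function names are invented for illustration:

#!/usr/bin/env python3
"""summarize_aborts.py -- collapse an SPDK qpair-abort storm into LBA ranges."""
import re
import sys
from collections import defaultdict

# Matches e.g. "*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101408 len:8"
CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:\d+")

def summarize(lines):
    """Group aborted commands by (opcode, sqid) and print count and LBA range."""
    groups = defaultdict(list)
    for line in lines:
        m = CMD.search(line)
        if m:
            op, sqid, lba = m.group(1), int(m.group(2)), int(m.group(3))
            groups[(op, sqid)].append(lba)
    for (op, sqid), lbas in sorted(groups.items()):
        print(f"{op} sqid:{sqid}: {len(lbas)} cmds, lba {min(lbas)}..{max(lbas)}")

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        summarize(f)

Run over the burst above, it would print something like "WRITE sqid:1: 31 cmds, lba 101408..101648" and "READ sqid:1: 23 cmds, lba 100672..100848".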
00:24:06.745 11022.00 IOPS, 43.05 MiB/s [2024-10-17T17:31:30.529Z] 11213.00 IOPS, 43.80 MiB/s [2024-10-17T17:31:30.529Z] 11215.00 IOPS, 43.81 MiB/s [2024-10-17T17:31:30.529Z]
00:24:06.745 [2024-10-17 19:31:19.215466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.745 [2024-10-17 19:31:19.215500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.745 [2024-10-17 19:31:19.215514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:06.745 [2024-10-17 19:31:19.215521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion pair repeats at 19:31:19.215-216 for every command queued on the qpair: WRITE lba:46864 through lba:47448 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:46600 through lba:46792 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all len:8 with scattered cids, all ABORTED - SQ DELETION (00/08); the section breaks off mid-entry after "[2024-10-17 19:31:19.216993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - " ...]
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.747 [2024-10-17 19:31:19.217198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.747 [2024-10-17 19:31:19.217206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.748 [2024-10-17 19:31:19.217212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.748 [2024-10-17 19:31:19.217226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.748 [2024-10-17 19:31:19.217245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.748 [2024-10-17 19:31:19.217260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.748 [2024-10-17 19:31:19.217275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:06.748 [2024-10-17 19:31:19.217289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 
19:31:19.217297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.748 [2024-10-17 19:31:19.217304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.748 [2024-10-17 19:31:19.217318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.748 [2024-10-17 19:31:19.217332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.748 [2024-10-17 19:31:19.217346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.748 [2024-10-17 19:31:19.217362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:06.748 [2024-10-17 19:31:19.217376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.748 [2024-10-17 19:31:19.217402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.748 [2024-10-17 19:31:19.217408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46848 len:8 PRP1 0x0 PRP2 0x0 00:24:06.748 [2024-10-17 19:31:19.217414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.748 [2024-10-17 19:31:19.217454] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc86060 was disconnected and freed. reset controller. 
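The "(00/08)" pair printed on every completion above is the NVMe status: status code type 0x0 (generic command status) and status code 0x8 (ABORTED - SQ DELETION), which the driver reports for every command still queued when an I/O submission queue is torn down during reset. A minimal sketch of the same decode against SPDK's public spec header, assuming only that spdk/nvme_spec.h is on the include path (the helper name is ours, not an SPDK API):

/* sq_deletion_decode.c - minimal sketch, assuming SPDK headers are
 * installed. Decodes the (SCT/SC) pair printed as "(00/08)" above:
 * SCT 0x0 = generic command status, SC 0x8 = ABORTED - SQ DELETION. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Hypothetical helper (not an SPDK API): true when a completion carries
 * the ABORTED - SQ DELETION status seen throughout this run. */
static bool
cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

int
main(void)
{
	struct spdk_nvme_cpl cpl = {0};

	cpl.status.sct = SPDK_NVME_SCT_GENERIC;           /* 0x0 -> "00" */
	cpl.status.sc = SPDK_NVME_SC_ABORTED_SQ_DELETION; /* 0x8 -> "08" */

	printf("sct/sc = (%02x/%02x) sq-deletion-abort=%d\n",
	       cpl.status.sct, cpl.status.sc, cpl_is_sq_deletion_abort(&cpl));
	return 0;
}

Note that every completion above carries dnr:0 (do-not-retry clear), which is what allows an upper layer such as bdev_nvme to requeue the aborted I/O once the controller reconnects.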
00:24:06.748 [2024-10-17 19:31:19.217464] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:06.748 [2024-10-17 19:31:19.217485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.748 [2024-10-17 19:31:19.217493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.748 [2024-10-17 19:31:19.217501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.748 [2024-10-17 19:31:19.217507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.748 [2024-10-17 19:31:19.217514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.748 [2024-10-17 19:31:19.217521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.748 [2024-10-17 19:31:19.217528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:06.748 [2024-10-17 19:31:19.217534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:06.748 [2024-10-17 19:31:19.217541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:06.748 [2024-10-17 19:31:19.217570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6400 (9): Bad file descriptor
00:24:06.748 [2024-10-17 19:31:19.220295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:06.748 [2024-10-17 19:31:19.380984] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
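The lines above trace the recovery arc: the outstanding admin ASYNC EVENT REQUESTs are aborted, the TCP qpair fails to flush (errno 9, Bad file descriptor), the controller is failed and disconnected, the target trid fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the reset completes. Against the raw NVMe driver the same detect-and-reset loop can be sketched as below. This is a hedged illustration only, assuming an already-connected ctrlr and qpair; spdk_nvme_qpair_process_completions and spdk_nvme_ctrlr_reset are public SPDK APIs, while poll_and_recover is our name, and depending on SPDK version I/O qpairs may additionally need spdk_nvme_ctrlr_reconnect_io_qpair after a reset.

/* recovery_loop.c - minimal sketch (not bdev_nvme's actual code path):
 * when a transport failure kills the qpair, process_completions returns
 * -ENXIO; the controller is then reset, which reconnects the admin queue
 * much like the "resetting controller" lines above. */
#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical helper, not an SPDK API. */
int
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* 0 = no limit on the number of completions processed per call. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc == -ENXIO) {
		/* Transport-level failure: any queued I/O is manually
		 * completed as ABORTED - SQ DELETION, then we reset. */
		fprintf(stderr, "qpair failed, resetting controller\n");
		return spdk_nvme_ctrlr_reset(ctrlr);
	}
	return rc < 0 ? (int)rc : 0;
}

In the bdev_nvme path this reset also triggers failover between the registered transport IDs, which is what the 4421 -> 4422 notice records.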
00:24:06.748 10880.80 IOPS, 42.50 MiB/s [2024-10-17T17:31:30.532Z] 10996.67 IOPS, 42.96 MiB/s [2024-10-17T17:31:30.532Z] 11070.43 IOPS, 43.24 MiB/s [2024-10-17T17:31:30.532Z] 11125.62 IOPS, 43.46 MiB/s [2024-10-17T17:31:30.532Z]
00:24:06.748 [2024-10-17 19:31:23.631395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:06.748 [2024-10-17 19:31:23.631429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 89 near-identical command/completion pairs elided: queued READ commands sqid:1 lba:104616-104680 (len:8 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and queued WRITE commands sqid:1 lba:104688-105320 (len:8 each, SGL DATA BLOCK OFFSET 0x0 len:0x1000), every one completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:06.750 [2024-10-17 19:31:23.632794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:06.750 [2024-10-17 19:31:23.632801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105328 len:8 PRP1 0x0 PRP2 0x0
00:24:06.750 [2024-10-17 19:31:23.632808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 23 near-identical abort/manual-complete sequences elided: nvme_qpair_abort_queued_reqs ("aborting queued i/o") followed by manual completion of queued WRITE commands sqid:1 cid:0 lba:105336-105512 (len:8 each, PRP1 0x0 PRP2 0x0), every one with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:06.751 [2024-10-17 19:31:23.633362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:06.751 [2024-10-17 19:31:23.633368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:06.751 [2024-10-17 19:31:23.633373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105520 len:8 PRP1 0x0 PRP2
0x0 00:24:06.751 [2024-10-17 19:31:23.633379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.751 [2024-10-17 19:31:23.633385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105528 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633414] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105536 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105544 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105552 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105560 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105568 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633521] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.633539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105576 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.633545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.633552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.633556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.644609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105584 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.644624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644634] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.644641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.644650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105592 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.644659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644670] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.644677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.644684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105600 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.644692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.644707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.644714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105608 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.644723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.644740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.644747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105616 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.644756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:06.752 [2024-10-17 19:31:23.644772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:06.752 [2024-10-17 19:31:23.644778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105624 len:8 PRP1 0x0 PRP2 0x0 00:24:06.752 [2024-10-17 19:31:23.644787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644831] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc05280 was disconnected and freed. reset controller. 00:24:06.752 [2024-10-17 19:31:23.644842] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:06.752 [2024-10-17 19:31:23.644868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.752 [2024-10-17 19:31:23.644879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.752 [2024-10-17 19:31:23.644899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.752 [2024-10-17 19:31:23.644918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:06.752 [2024-10-17 19:31:23.644936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:06.752 [2024-10-17 19:31:23.644945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:06.752 [2024-10-17 19:31:23.644975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbd6400 (9): Bad file descriptor 00:24:06.752 [2024-10-17 19:31:23.650409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:06.752 11151.78 IOPS, 43.56 MiB/s [2024-10-17T17:31:30.536Z] [2024-10-17 19:31:23.726453] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
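The harness validates this abort/reset storm by counting the reset markers in the captured bdevperf output; the trace below does exactly that with grep -c. A minimal standalone sketch of the same check, assuming the output was captured to try.txt as in this run (the harness keeps it under test/nvmf/host/):

    # count how many controller resets completed in the captured log
    count=$(grep -c 'Resetting controller successful' try.txt)
    # this run detaches three listeners in turn, so three resets are expected
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi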
00:24:06.752 11105.10 IOPS, 43.38 MiB/s [2024-10-17T17:31:30.536Z] 11132.82 IOPS, 43.49 MiB/s [2024-10-17T17:31:30.536Z] 11182.75 IOPS, 43.68 MiB/s [2024-10-17T17:31:30.536Z] 11198.00 IOPS, 43.74 MiB/s [2024-10-17T17:31:30.536Z] 11212.21 IOPS, 43.80 MiB/s
00:24:06.752 Latency(us)
00:24:06.752 [2024-10-17T17:31:30.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.752 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:06.752 Verification LBA range: start 0x0 length 0x4000
00:24:06.752 NVMe0n1 : 15.01 11216.92 43.82 959.43 0.00 10490.76 425.20 23218.47
00:24:06.752 [2024-10-17T17:31:30.536Z] ===================================================================================================================
00:24:06.752 [2024-10-17T17:31:30.536Z] Total : 11216.92 43.82 959.43 0.00 10490.76 425.20 23218.47
00:24:06.752 Received shutdown signal, test time was about 15.000000 seconds
00:24:06.752
00:24:06.752 Latency(us)
00:24:06.752 [2024-10-17T17:31:30.536Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.752 [2024-10-17T17:31:30.536Z] ===================================================================================================================
00:24:06.752 [2024-10-17T17:31:30.536Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2196510
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2196510 /var/tmp/bdevperf.sock
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2196510 ']'
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
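For reference, the bdevperf instance being waited on above is launched idle (-z) against its own RPC socket, so controllers can be attached before any I/O starts and the run is later triggered over RPC with bdevperf.py perform_tests. A condensed sketch of that launch pattern, with flags taken from the trace (waitforlisten is the helper sourced from SPDK's autotest_common.sh):

    # start bdevperf idle (-z) on a private RPC socket;
    # queue depth 128, 4 KiB I/O, verify workload, flags mirror the trace above
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # block until the UNIX-domain RPC socket is accepting connections
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock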
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:06.752 19:31:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:06.752 19:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:06.752 19:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:24:06.752 19:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:06.752 [2024-10-17 19:31:30.351723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:06.752 19:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:07.012 [2024-10-17 19:31:30.556297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:24:07.012 19:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:07.271 NVMe0n1
00:24:07.271 19:31:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:07.530
00:24:07.530 19:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:07.789
00:24:08.066 19:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:08.066 19:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:08.066 19:31:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:08.332 19:31:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:11.621 19:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:11.621 19:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:11.621 19:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:11.621 19:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2197428
00:24:11.621 19:31:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2197428
00:24:12.558 {
00:24:12.558 "results": [
00:24:12.558 {
00:24:12.558 "job": "NVMe0n1",
00:24:12.558 "core_mask": "0x1",
00:24:12.558 "workload": "verify",
00:24:12.558 "status": "finished",
00:24:12.558 "verify_range": {
00:24:12.558 "start": 0,
00:24:12.558 "length": 16384
00:24:12.558 },
00:24:12.558 "queue_depth": 128,
00:24:12.558 "io_size": 4096,
00:24:12.558 "runtime": 1.003194,
00:24:12.558 "iops": 11471.360474643987,
00:24:12.558 "mibps": 44.810001854078074,
00:24:12.558 "io_failed": 0,
00:24:12.558 "io_timeout": 0,
00:24:12.558 "avg_latency_us": 11116.906841451908,
00:24:12.558 "min_latency_us": 2356.175238095238,
00:24:12.558 "max_latency_us": 8925.379047619048
00:24:12.558 }
00:24:12.558 ],
00:24:12.558 "core_count": 1
00:24:12.558 }
00:24:12.558 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:12.558 [2024-10-17 19:31:29.939179] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
00:24:12.558 [2024-10-17 19:31:29.939232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196510 ]
00:24:12.558 [2024-10-17 19:31:30.019105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:12.558 [2024-10-17 19:31:30.064647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:12.558 [2024-10-17 19:31:31.975777] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:12.558 [2024-10-17 19:31:31.975821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:12.558 [2024-10-17 19:31:31.975833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:12.558 (identical ASYNC EVENT REQUEST/ABORTED - SQ DELETION pairs for cid:1, cid:2 and cid:3 elided)
00:24:12.558 [2024-10-17 19:31:31.975886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:12.558 [2024-10-17 19:31:31.975910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:12.558 [2024-10-17 19:31:31.975923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2256400 (9): Bad file descriptor
00:24:12.558 [2024-10-17 19:31:31.986428] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:12.558 Running I/O for 1 seconds...
00:24:12.558 11380.00 IOPS, 44.45 MiB/s
00:24:12.558 Latency(us)
00:24:12.558 [2024-10-17T17:31:36.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:12.558 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:12.558 Verification LBA range: start 0x0 length 0x4000
00:24:12.558 NVMe0n1 : 1.00 11471.36 44.81 0.00 0.00 11116.91 2356.18 8925.38
00:24:12.558 [2024-10-17T17:31:36.342Z] ===================================================================================================================
00:24:12.558 [2024-10-17T17:31:36.342Z] Total : 11471.36 44.81 0.00 0.00 11116.91 2356.18 8925.38
00:24:12.558 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:12.817 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:12.817 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:13.077 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:13.077 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:13.405 19:31:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:13.405 19:31:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2196510
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2196510 ']'
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2196510
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2196510
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2196510'
killing process with pid 2196510
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2196510
00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2196510
00:24:16.758 19:31:40
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:16.758 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:17.018 rmmod nvme_tcp 00:24:17.018 rmmod nvme_fabrics 00:24:17.018 rmmod nvme_keyring 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 2193491 ']' 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 2193491 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2193491 ']' 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2193491 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.018 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2193491 00:24:17.276 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:17.277 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:17.277 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2193491' 00:24:17.277 killing process with pid 2193491 00:24:17.277 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2193491 00:24:17.277 19:31:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2193491 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.277 19:31:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.814 00:24:19.814 real 0m38.025s 00:24:19.814 user 2m0.273s 00:24:19.814 sys 0m7.947s 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:19.814 ************************************ 00:24:19.814 END TEST nvmf_failover 00:24:19.814 ************************************ 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.814 ************************************ 00:24:19.814 START TEST nvmf_host_discovery 00:24:19.814 ************************************ 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:19.814 * Looking for test storage... 
00:24:19.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.814 --rc genhtml_branch_coverage=1 00:24:19.814 --rc genhtml_function_coverage=1 00:24:19.814 --rc genhtml_legend=1 00:24:19.814 --rc geninfo_all_blocks=1 00:24:19.814 --rc geninfo_unexecuted_blocks=1 00:24:19.814 00:24:19.814 ' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.814 --rc genhtml_branch_coverage=1 00:24:19.814 --rc genhtml_function_coverage=1 00:24:19.814 --rc genhtml_legend=1 00:24:19.814 --rc geninfo_all_blocks=1 00:24:19.814 --rc geninfo_unexecuted_blocks=1 00:24:19.814 00:24:19.814 ' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.814 --rc genhtml_branch_coverage=1 00:24:19.814 --rc genhtml_function_coverage=1 00:24:19.814 --rc genhtml_legend=1 00:24:19.814 --rc geninfo_all_blocks=1 00:24:19.814 --rc geninfo_unexecuted_blocks=1 00:24:19.814 00:24:19.814 ' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:19.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.814 --rc genhtml_branch_coverage=1 00:24:19.814 --rc genhtml_function_coverage=1 00:24:19.814 --rc genhtml_legend=1 00:24:19.814 --rc geninfo_all_blocks=1 00:24:19.814 --rc geninfo_unexecuted_blocks=1 00:24:19.814 00:24:19.814 ' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:19.814 19:31:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.814 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.815 19:31:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:26.386 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:26.386 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.386 19:31:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:26.386 Found net devices under 0000:86:00.0: cvl_0_0 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:26.386 Found net devices under 0000:86:00.1: cvl_0_1 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.386 
19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.386 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.387 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.387 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.387 19:31:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:24:26.387 00:24:26.387 --- 10.0.0.2 ping statistics --- 00:24:26.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.387 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:24:26.387 00:24:26.387 --- 10.0.0.1 ping statistics --- 00:24:26.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.387 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # return 0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # nvmfpid=2201885 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # waitforlisten 2201885 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2201885 ']' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 [2024-10-17 19:31:49.331957] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
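Annotation: the nvmf_tcp_init block above builds the whole two-endpoint TCP topology on a single machine: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction verifies the link before any NVMe traffic flows. Condensed into plain commands (interface names and addresses from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns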
00:24:26.387 [2024-10-17 19:31:49.332006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.387 [2024-10-17 19:31:49.410303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.387 [2024-10-17 19:31:49.449980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.387 [2024-10-17 19:31:49.450016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.387 [2024-10-17 19:31:49.450023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.387 [2024-10-17 19:31:49.450029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.387 [2024-10-17 19:31:49.450035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.387 [2024-10-17 19:31:49.450593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 [2024-10-17 19:31:49.594021] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 [2024-10-17 19:31:49.606218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 null0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 null1 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2201910 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2201910 /tmp/host.sock 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2201910 ']' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:26.387 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 [2024-10-17 19:31:49.680707] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
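Annotation: two SPDK apps are now running: the target (nvmf_tgt inside the namespace, core mask 0x2, all tracepoint groups enabled via -e 0xFFFF, RPC on the default /var/tmp/spdk.sock) and the host app just launched with core mask 0x1 and its own RPC socket at /tmp/host.sock. The target-side bring-up condensed, with rpc.py standing in for the rpc_cmd wrapper the trace uses (flags, sizes, and paths exactly as logged):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    # once the app listens on /var/tmp/spdk.sock:
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    rpc.py bdev_null_create null0 1000 512    # size 1000, 512-byte blocks, per the trace
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine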
00:24:26.387 [2024-10-17 19:31:49.680748] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201910 ] 00:24:26.387 [2024-10-17 19:31:49.754499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.387 [2024-10-17 19:31:49.799243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.387 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.388 19:31:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.388 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.647 [2024-10-17 19:31:50.227831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.647 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:26.648 19:31:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:26.648 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:26.907 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:24:26.907 19:31:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:27.476 [2024-10-17 19:31:50.962110] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:27.476 [2024-10-17 19:31:50.962133] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:27.476 [2024-10-17 19:31:50.962146] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:27.476 
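Annotation: the "discovery ctrlr attached/connected" lines just above are bdev_nvme's persistent discovery service kicking in: the host connected to the well-known discovery subsystem on 10.0.0.2:8009 and requested a discovery log page. For that page to advertise anything, the target had to be provisioned first, which the preceding rpc_cmd calls did; condensed (rpc.py again standing in for rpc_cmd, all NQNs and ports from this log):

    # target side (default /var/tmp/spdk.sock)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
    # host side (the second app, on /tmp/host.sock)
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

Until nvmf_subsystem_add_host allowed the host NQN nqn.2021-12.io.spdk:test, the earlier get_subsystem_names/get_bdev_list checks correctly came back empty; right after it, discovery reported cnode0, bdev_nvme attached it as controller nvme0, and its namespace surfaced as bdev nvme0n1.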
[2024-10-17 19:31:51.090520] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:27.476 [2024-10-17 19:31:51.153201] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:27.476 [2024-10-17 19:31:51.153219] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:27.734 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:27.735 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:27.994 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.254 [2024-10-17 19:31:51.916351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:28.254 [2024-10-17 19:31:51.916956] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:28.254 [2024-10-17 19:31:51.916983] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 
-- # get_subsystem_names 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:28.254 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:24:28.255 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:28.255 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:28.255 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.255 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:28.255 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.255 19:31:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:28.255 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.514 [2024-10-17 19:31:52.043621] bdev_nvme.c:7077:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:28.514 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:28.514 19:31:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:24:28.514 [2024-10-17 19:31:52.148365] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:28.514 [2024-10-17 19:31:52.148384] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:28.514 [2024-10-17 19:31:52.148389] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:29.452 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:29.453 19:31:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.453 [2024-10-17 19:31:53.176251] bdev_nvme.c:7135:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:29.453 [2024-10-17 19:31:53.176275] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # get_subsystem_names 00:24:29.453 [2024-10-17 19:31:53.184977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.453 [2024-10-17 19:31:53.184997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.453 [2024-10-17 19:31:53.185007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.453 [2024-10-17 19:31:53.185013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.453 [2024-10-17 19:31:53.185021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.453 [2024-10-17 19:31:53.185028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.453 [2024-10-17 19:31:53.185035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.453 [2024-10-17 19:31:53.185042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.453 [2024-10-17 19:31:53.185049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:29.453 [2024-10-17 19:31:53.194989] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor 00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:29.453 [2024-10-17 19:31:53.205025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.453 [2024-10-17 19:31:53.205367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.453 [2024-10-17 19:31:53.205382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb450 with addr=10.0.0.2, port=4420 00:24:29.453 [2024-10-17 19:31:53.205391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set 00:24:29.453 [2024-10-17 19:31:53.205403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor 00:24:29.453 [2024-10-17 19:31:53.205420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.453 [2024-10-17 19:31:53.205428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.453 [2024-10-17 19:31:53.205437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.453 [2024-10-17 19:31:53.205448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.453 [2024-10-17 19:31:53.215080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.453 [2024-10-17 19:31:53.215203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.453 [2024-10-17 19:31:53.215215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb450 with addr=10.0.0.2, port=4420 00:24:29.453 [2024-10-17 19:31:53.215222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set 00:24:29.453 [2024-10-17 19:31:53.215233] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor 00:24:29.453 [2024-10-17 19:31:53.215243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.453 [2024-10-17 19:31:53.215250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.453 [2024-10-17 19:31:53.215257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.453 [2024-10-17 19:31:53.215266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:29.453 [2024-10-17 19:31:53.225131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:29.453 [2024-10-17 19:31:53.225350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:29.453 [2024-10-17 19:31:53.225364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb450 with addr=10.0.0.2, port=4420 00:24:29.453 [2024-10-17 19:31:53.225372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set 00:24:29.453 [2024-10-17 19:31:53.225383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor 00:24:29.453 [2024-10-17 19:31:53.225393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:29.453 [2024-10-17 19:31:53.225399] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:29.453 [2024-10-17 19:31:53.225406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:29.453 [2024-10-17 19:31:53.225416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
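Annotation: the error burst above is the expected fallout of nvmf_subsystem_remove_listener taking 10.0.0.2:4420 away while the host still had a controller on that path: the target tears down the queue pairs (the ABORTED - SQ DELETION completions), the socket dies (Bad file descriptor), and every reconnect attempt gets connect() errno 111 (ECONNREFUSED) because nothing listens on 4420 anymore, so bdev_nvme keeps retrying the controller reset. The trigger, as issued on the target side:

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # the host is expected to settle on the surviving 4421 path once the
    # next discovery log page drops 4420 from the subsystem's entries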
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.453 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.453 [2024-10-17 19:31:53.235185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:29.453 [2024-10-17 19:31:53.235292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.453 [2024-10-17 19:31:53.235304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb450 with addr=10.0.0.2, port=4420
00:24:29.454 [2024-10-17 19:31:53.235311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set
00:24:29.454 [2024-10-17 19:31:53.235321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor
00:24:29.454 [2024-10-17 19:31:53.235331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:29.454 [2024-10-17 19:31:53.235337] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:29.454 [2024-10-17 19:31:53.235345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:29.454 [2024-10-17 19:31:53.235355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
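[Editor's note] The common/autotest_common.sh@914-@918 frames above trace the suite's generic polling helper. A sketch of the loop consistent with those line numbers; the retry delay and the failure return are not visible in this excerpt (the condition passes on the first evaluation here), so treat them as assumptions:

    waitforcondition() {
        # Re-evaluate an arbitrary condition string until it holds
        # or the retry budget is exhausted.
        local cond=$1
        local max=10
        while ((max--)); do
            if eval "$cond"; then
                return 0
            fi
            sleep 1   # assumed back-off; not shown in this trace
        done
        return 1
    }

Usage, exactly as traced: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.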
00:24:29.713 [2024-10-17 19:31:53.245239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:29.713 [2024-10-17 19:31:53.245479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.713 [2024-10-17 19:31:53.245493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb450 with addr=10.0.0.2, port=4420
00:24:29.713 [2024-10-17 19:31:53.245501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set
00:24:29.713 [2024-10-17 19:31:53.245512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor
00:24:29.714 [2024-10-17 19:31:53.245523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:29.714 [2024-10-17 19:31:53.245530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:29.714 [2024-10-17 19:31:53.245537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:29.714 [2024-10-17 19:31:53.245546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:29.714 [2024-10-17 19:31:53.255294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:24:29.714 [2024-10-17 19:31:53.255528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.714 [2024-10-17 19:31:53.255541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22eb450 with addr=10.0.0.2, port=4420
00:24:29.714 [2024-10-17 19:31:53.255549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22eb450 is same with the state(6) to be set
00:24:29.714 [2024-10-17 19:31:53.255563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22eb450 (9): Bad file descriptor
00:24:29.714 [2024-10-17 19:31:53.255578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:24:29.714 [2024-10-17 19:31:53.255585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:24:29.714 [2024-10-17 19:31:53.255592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:24:29.714 [2024-10-17 19:31:53.255607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
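[Editor's note] The burst of connect() failures against port 4420 lines up with the discovery messages just below: 4420 is reported "not found" while 4421 is "found again". A plausible target-side sequence producing that state, sketched with SPDK's stock rpc.py; the actual target commands are outside this excerpt, so the exact ordering is an assumption:

    # Announce the subsystem on the second port, then retire the first one.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Each host reconnect to 4420 then dies in posix_sock_create with errno 111 until the discovery poller attaches the controller via 4421.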
00:24:29.714 [2024-10-17 19:31:53.263143] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:24:29.714 [2024-10-17 19:31:53.263160] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]]
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- ))
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:29.714 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count ))
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:29.974 19:31:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:30.910 [2024-10-17 19:31:54.591006] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:30.910 [2024-10-17 19:31:54.591023] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:30.910 [2024-10-17 19:31:54.591035] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:31.170 [2024-10-17 19:31:54.717421] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:24:31.170 [2024-10-17 19:31:54.778157] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:31.170 [2024-10-17 19:31:54.778183] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:31.170 request:
00:24:31.170 {
00:24:31.170 "name": "nvme",
00:24:31.170 "trtype": "tcp",
00:24:31.170 "traddr": "10.0.0.2",
00:24:31.170 "adrfam": "ipv4",
00:24:31.170 "trsvcid": "8009",
00:24:31.170 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:31.170 "wait_for_attach": true,
00:24:31.170 "method": "bdev_nvme_start_discovery",
00:24:31.170 "req_id": 1
00:24:31.170 }
00:24:31.170 Got JSON-RPC error response
00:24:31.170 response:
00:24:31.170 {
00:24:31.170 "code": -17,
00:24:31.170 "message": "File exists"
00:24:31.170 }
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:31.170 request:
00:24:31.170 {
00:24:31.170 "name": "nvme_second",
00:24:31.170 "trtype": "tcp",
00:24:31.170 "traddr": "10.0.0.2",
00:24:31.170 "adrfam": "ipv4",
00:24:31.170 "trsvcid": "8009",
00:24:31.170 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:31.170 "wait_for_attach": true,
00:24:31.170 "method": "bdev_nvme_start_discovery",
00:24:31.170 "req_id": 1
00:24:31.170 }
00:24:31.170 Got JSON-RPC error response
00:24:31.170 response:
00:24:31.170 {
00:24:31.170 "code": -17,
00:24:31.170 "message": "File exists"
00:24:31.170 }
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:31.170 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:31.430 19:31:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:31.430 19:31:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:32.366 [2024-10-17 19:31:56.018013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:32.366 [2024-10-17 19:31:56.018040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e8890 with addr=10.0.0.2, port=8010
00:24:32.366 [2024-10-17 19:31:56.018051] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:32.366 [2024-10-17 19:31:56.018058] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:32.366 [2024-10-17 19:31:56.018065] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:33.302 [2024-10-17 19:31:57.020448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.302 [2024-10-17 19:31:57.020472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22e8890 with addr=10.0.0.2, port=8010
00:24:33.302 [2024-10-17 19:31:57.020483] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:24:33.302 [2024-10-17 19:31:57.020489] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:24:33.302 [2024-10-17 19:31:57.020496] bdev_nvme.c:7221:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:24:34.239 [2024-10-17 19:31:58.022644] bdev_nvme.c:7196:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:24:34.498 request:
00:24:34.498 {
00:24:34.498 "name": "nvme_second",
00:24:34.498 "trtype": "tcp",
00:24:34.498 "traddr": "10.0.0.2",
00:24:34.498 "adrfam": "ipv4",
00:24:34.498 "trsvcid": "8010",
00:24:34.498 "hostnqn": "nqn.2021-12.io.spdk:test",
00:24:34.498 "wait_for_attach": false,
00:24:34.498 "attach_timeout_ms": 3000,
00:24:34.498 "method": "bdev_nvme_start_discovery",
00:24:34.498 "req_id": 1
00:24:34.498 }
00:24:34.498 Got JSON-RPC error response
00:24:34.498 response:
00:24:34.498 {
00:24:34.498 "code": -110,
00:24:34.498 "message": "Connection timed out"
00:24:34.498 }
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:34.498 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2201910
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # nvmfcleanup
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:34.499 rmmod nvme_tcp
00:24:34.499 rmmod nvme_fabrics
00:24:34.499 rmmod nvme_keyring
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@515 -- # '[' -n 2201885 ']'
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # killprocess 2201885
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2201885 ']'
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2201885
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2201885
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2201885'
00:24:34.499 killing process with pid 2201885
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2201885
00:24:34.499 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2201885
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-save
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@789 -- # iptables-restore
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:34.758 19:31:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:36.664 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:36.664
00:24:36.664 real 0m17.260s
00:24:36.664 user 0m20.636s
00:24:36.664 sys 0m5.821s
00:24:36.664 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:36.664 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:36.664 ************************************
00:24:36.664 END TEST nvmf_host_discovery
00:24:36.664 ************************************
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:36.923 ************************************
00:24:36.923 START TEST nvmf_host_multipath_status
00:24:36.923 ************************************
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:24:36.923 * Looking for test storage...
00:24:36.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:24:36.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:36.923 --rc genhtml_branch_coverage=1
00:24:36.923 --rc genhtml_function_coverage=1
00:24:36.923 --rc genhtml_legend=1
00:24:36.923 --rc geninfo_all_blocks=1
00:24:36.923 --rc geninfo_unexecuted_blocks=1
00:24:36.923
00:24:36.923 '
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:24:36.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:36.923 --rc genhtml_branch_coverage=1
00:24:36.923 --rc genhtml_function_coverage=1
00:24:36.923 --rc genhtml_legend=1
00:24:36.923 --rc geninfo_all_blocks=1
00:24:36.923 --rc geninfo_unexecuted_blocks=1
00:24:36.923
00:24:36.923 '
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:24:36.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:36.923 --rc genhtml_branch_coverage=1
00:24:36.923 --rc genhtml_function_coverage=1
00:24:36.923 --rc genhtml_legend=1
00:24:36.923 --rc geninfo_all_blocks=1
00:24:36.923 --rc geninfo_unexecuted_blocks=1
00:24:36.923
00:24:36.923 '
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:24:36.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:36.923 --rc genhtml_branch_coverage=1
00:24:36.923 --rc genhtml_function_coverage=1
00:24:36.923 --rc genhtml_legend=1
00:24:36.923 --rc geninfo_all_blocks=1
00:24:36.923 --rc geninfo_unexecuted_blocks=1
00:24:36.923
00:24:36.923 '
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
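[Editor's note] Two negative-path checks dominate the discovery trace above: re-starting discovery under the already-used name nvme fails immediately with JSON-RPC error -17 ("File exists"), and nvme_second pointed at the dead port 8010 with a 3000 ms attach timeout fails with -110 ("Connection timed out") after repeated connect attempts. The invocations below are copied directly from the logged RPCs; rpc_cmd is the suite's wrapper around scripts/rpc.py:

    # Duplicate -b name on a live discovery service -> -17, "File exists"
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

    # Unreachable discovery endpoint with -T 3000 -> -110, "Connection timed out"
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000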
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:36.923 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:36.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:24:36.924 19:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:24:43.494 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:24:43.495 Found 0000:86:00.0 (0x8086 - 0x159b)
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
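[Editor's note] The gather_supported_nvmf_pci_devs walk above buckets NICs by PCI vendor:device ID (Intel E810/X722, various Mellanox parts), keeps the e810 bucket for this phy run, and then maps each matching PCI function to its kernel net device through sysfs. A standalone sketch of the same lookup, assuming lspci is available (the real script uses a pre-built pci_bus_cache array instead):

    # Enumerate Intel E810 functions (0x8086:0x159b) and print their netdevs,
    # mirroring the 'Found net devices under ...' lines in the trace below.
    for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue   # skip functions with no bound netdev
            echo "Found net devices under $pci: $(basename "$dev")"
        done
    done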
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:43.495 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:43.495 Found net devices under 0000:86:00.0: cvl_0_0 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:24:43.495 Found net devices under 0000:86:00.1: cvl_0_1 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.495 19:32:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:43.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:43.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms
00:24:43.495
00:24:43.495 --- 10.0.0.2 ping statistics ---
00:24:43.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:43.495 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:43.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:43.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:24:43.495
00:24:43.495 --- 10.0.0.1 ping statistics ---
00:24:43.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:43.495 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # nvmfpid=2206983
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 2206983
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2206983 ']'
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:43.495 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:43.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
[2024-10-17 19:32:06.689399] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
[2024-10-17 19:32:06.689442] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-10-17 19:32:06.769070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-10-17 19:32:06.810144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-10-17 19:32:06.810180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-10-17 19:32:06.810187] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-10-17 19:32:06.810193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-10-17 19:32:06.810198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-10-17 19:32:06.811375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-10-17 19:32:06.811377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2206983
00:24:43.496 19:32:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-10-17 19:32:07.107473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
19:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:43.753 Malloc0
00:24:43.754 19:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:24:44.012 19:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:44.012 19:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-10-17 19:32:07.943459] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:44.270 19:32:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
[2024-10-17 19:32:08.143974] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2207236
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2207236 /var/tmp/bdevperf.sock
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2207236 ']'
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:44.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
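Condensed from the xtrace above, the whole target-side provisioning sequence is six RPCs (same rpc.py, NQN, serial number and listener addresses as in the trace; the two shell variables are only repeated here for readability):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_create_transport -t tcp -o -u 8192                        # TCP transport; -u sets 8192-byte in-capsule data
$rpc_py bdev_malloc_create 64 512 -b Malloc0                           # 64 MiB RAM bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2  # -r enables ANA reporting
$rpc_py nvmf_subsystem_add_ns "$NQN" Malloc0
$rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420  # two listeners on one IP,
$rpc_py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421  # giving the host two paths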
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:44.530 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:24:44.789 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:24:44.789 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0
00:24:44.789 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:45.048 19:32:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:45.307 Nvme0n1
00:24:45.307 19:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:24:45.875 Nvme0n1
00:24:45.875 19:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:24:45.875 19:32:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:24:47.779 19:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:24:47.779 19:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:24:48.037 19:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:24:48.295 19:32:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:24:49.232 19:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:24:49.232 19:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:24:49.232 19:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:49.232 19:32:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:24:49.520 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:49.520 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:24:49.520 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:49.520 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:24:49.780 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:50.038 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:50.039 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:24:50.039 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:50.039 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:24:50.297 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:50.297 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:24:50.297 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:24:50.297 19:32:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:24:50.556 19:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:24:50.556 19:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:24:50.556 19:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
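Every probe in these check_status blocks is the same two-stage pipeline: dump the initiator's I/O paths over the bdevperf RPC socket, select the path by listener port, and read one boolean field. A sketch of the port_status helper as it can be reconstructed from the trace (rpc_py and bdevperf_rpc_sock as defined at the top of the script):

# port_status <trsvcid> <field> <expected>: succeed iff the io_path through
# the given listener port reports the expected current/connected/accessible value.
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$($rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ $actual == "$expected" ]]
}

port_status 4420 current true   # e.g. the first probe of the cycle above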
00:24:50.814 19:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:50.814 19:32:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.192 19:32:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.451 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:52.710 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.710 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:52.710 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:52.710 19:32:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.969 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:52.969 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:52.969 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:52.969 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:53.228 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:53.228 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:53.228 19:32:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:53.486 19:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:53.486 19:32:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:54.863 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.122 19:32:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:55.381 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.381 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:55.381 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.381 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:55.639 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.639 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:55.639 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:55.639 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:55.898 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:55.898 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:55.898 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:56.158 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:56.417 19:32:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:57.352 19:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:57.352 19:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:57.352 19:32:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.352 19:32:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.611 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:57.870 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:57.870 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:57.870 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:57.870 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:58.129 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.129 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:58.129 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.129 19:32:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:58.387 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:58.387 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:58.387 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:58.387 19:32:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:58.645 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:58.645 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:58.645 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:58.645 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:58.903 19:32:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:59.839 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:59.839 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:59.839 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:59.839 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:00.098 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.098 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:00.098 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.098 19:32:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:00.357 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.357 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:00.357 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.357 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:00.615 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.615 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:00.615 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:25:00.615 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.873 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:00.874 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.132 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.132 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:01.132 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:01.390 19:32:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:01.649 19:32:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:02.587 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:02.587 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:02.587 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.587 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.846 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.846 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:02.846 19:32:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.847 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.847 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.847 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.847 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.847 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:03.106 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.106 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:03.106 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.106 19:32:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:03.380 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.380 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:03.380 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.380 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.665 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.665 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.665 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.665 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.665 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.665 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:03.939 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:03.939 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:04.204 19:32:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:04.464 19:32:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:05.400 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:05.400 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:05.400 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.400 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.659 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.659 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:05.659 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.659 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.918 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.177 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.177 19:32:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.177 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.177 19:32:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.436 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.436 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:06.436 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.436 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.695 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.695 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:06.695 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:06.954 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:06.955 19:32:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.333 19:32:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.592 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.850 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.850 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.850 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.850 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.109 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.109 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:09.109 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.109 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.368 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.368 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:09.368 19:32:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:09.626 19:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:09.626 19:32:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
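Each ANA transition in this test is one helper call, two listener-level RPCs, followed by a one-second settle before the next check_status so the host's multipath layer can pick up the ANA change. As reconstructed from the trace:

set_ANA_state() {   # $1: state for the 4420 listener, $2: state for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state "$NQN" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized non_optimized   # the combination just applied above
sleep 1                                     # settle before check_status runs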
00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.003 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.260 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.260 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.260 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.260 19:32:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.518 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.518 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:11.518 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.518 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.777 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.778 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:11.778 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.778 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.036 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:12.036 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:12.036 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:12.036 19:32:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:12.295 19:32:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:13.681 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:13.681 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:13.681 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.681 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.681 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.682 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.682 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.682 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.682 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.682 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.941 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.941 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.941 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:25:13.941 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.941 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.941 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:14.200 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.200 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:14.200 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.200 19:32:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:14.459 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.459 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:14.459 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:14.459 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2207236 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2207236 ']' 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2207236 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2207236 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2207236' 00:25:14.718 killing process with pid 2207236 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2207236 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2207236 00:25:14.718 { 00:25:14.718 "results": [ 00:25:14.718 { 00:25:14.718 "job": "Nvme0n1", 
00:25:14.718 "core_mask": "0x4", 00:25:14.718 "workload": "verify", 00:25:14.718 "status": "terminated", 00:25:14.718 "verify_range": { 00:25:14.718 "start": 0, 00:25:14.718 "length": 16384 00:25:14.718 }, 00:25:14.718 "queue_depth": 128, 00:25:14.718 "io_size": 4096, 00:25:14.718 "runtime": 28.714601, 00:25:14.718 "iops": 10624.176877819058, 00:25:14.718 "mibps": 41.500690928980696, 00:25:14.718 "io_failed": 0, 00:25:14.718 "io_timeout": 0, 00:25:14.718 "avg_latency_us": 12028.815127656522, 00:25:14.718 "min_latency_us": 795.7942857142857, 00:25:14.718 "max_latency_us": 3083812.083809524 00:25:14.718 } 00:25:14.718 ], 00:25:14.718 "core_count": 1 00:25:14.718 } 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2207236 00:25:14.718 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:14.999 [2024-10-17 19:32:08.210271] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:25:14.999 [2024-10-17 19:32:08.210326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207236 ] 00:25:14.999 [2024-10-17 19:32:08.287259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.999 [2024-10-17 19:32:08.327392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.999 Running I/O for 90 seconds... 00:25:14.999 11378.00 IOPS, 44.45 MiB/s [2024-10-17T17:32:38.783Z] 11330.50 IOPS, 44.26 MiB/s [2024-10-17T17:32:38.783Z] 11426.33 IOPS, 44.63 MiB/s [2024-10-17T17:32:38.783Z] 11442.50 IOPS, 44.70 MiB/s [2024-10-17T17:32:38.783Z] 11455.80 IOPS, 44.75 MiB/s [2024-10-17T17:32:38.783Z] 11465.50 IOPS, 44.79 MiB/s [2024-10-17T17:32:38.783Z] 11483.00 IOPS, 44.86 MiB/s [2024-10-17T17:32:38.783Z] 11482.50 IOPS, 44.85 MiB/s [2024-10-17T17:32:38.783Z] 11488.89 IOPS, 44.88 MiB/s [2024-10-17T17:32:38.783Z] 11496.00 IOPS, 44.91 MiB/s [2024-10-17T17:32:38.783Z] 11491.73 IOPS, 44.89 MiB/s [2024-10-17T17:32:38.783Z] 11488.50 IOPS, 44.88 MiB/s [2024-10-17T17:32:38.783Z] [2024-10-17 19:32:22.388901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.999 [2024-10-17 19:32:22.388942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:14.999 [2024-10-17 19:32:22.388961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.999 [2024-10-17 19:32:22.388969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:14.999 [2024-10-17 19:32:22.388982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.999 [2024-10-17 19:32:22.388989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:14.999 [2024-10-17 19:32:22.389002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:14.999 [2024-10-17 19:32:22.389009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:14.999 [2024-10-17 19:32:22.389021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.999 [2024-10-17 19:32:22.389028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:14.999 [2024-10-17 19:32:22.389040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:14.999 [2024-10-17 19:32:22.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 
nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:25:15.000 [2024-10-17 19:32:22.389574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.389987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.389994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.390006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.390013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.390025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.390032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.390043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.390050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.390061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.390068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.390081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.000 [2024-10-17 19:32:22.390087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.000 [2024-10-17 19:32:22.390099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.001 [2024-10-17 19:32:22.390446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 
nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.390982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.390995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.001 [2024-10-17 19:32:22.391119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.001 [2024-10-17 19:32:22.391131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005f p:0 m:0 
dnr:0 00:25:15.002 [2024-10-17 19:32:22.391283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.002 [2024-10-17 19:32:22.391631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 19:32:22.391650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.002 [2024-10-17 19:32:22.391662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.002 [2024-10-17 
19:32:22.391668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:15.002 [2024-10-17 19:32:22.391680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.002 [2024-10-17 19:32:22.391687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
[... several hundred near-identical nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (console time 00:25:15.002-00:25:15.008, log time 19:32:22.391-19:32:22.408): every outstanding READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on qid:1, sqid:1, nsid:1, len:8, lba 128536-129552 is completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), p:0 m:0 dnr:0 ...]
00:25:15.008 [2024-10-17 19:32:22.408311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408320] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.008 [2024-10-17 19:32:22.408703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.008 [2024-10-17 19:32:22.408720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.408730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.408747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.408756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.408773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.408783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.408801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.408810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:122 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415552] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.415630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 
cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.415980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.415990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.416007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.416018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.416035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.416045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.416063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.009 [2024-10-17 19:32:22.416072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.416090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.416099] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.416118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.416128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.009 [2024-10-17 19:32:22.416146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.009 [2024-10-17 19:32:22.416155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.010 [2024-10-17 19:32:22.416563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.416975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.416994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 
sqhd:0019 p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.010 [2024-10-17 19:32:22.417194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.010 [2024-10-17 19:32:22.417211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417724] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.417795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.417805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.011 [2024-10-17 19:32:22.419682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.011 [2024-10-17 19:32:22.419705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.011 [2024-10-17 19:32:22.419718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:15.012 [2024-10-17 19:32:22.419741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.012 [2024-10-17 19:32:22.419754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:15.012 [2024-10-17 19:32:22.421468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.012 [2024-10-17 19:32:22.421481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... a long run of similar nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (00:25:15.012-00:25:15.017): WRITE commands (lba 128800-129552, SGL DATA BLOCK OFFSET) and READ commands (lba 128536-128792, SGL TRANSPORT DATA BLOCK) on qid:1, each len:8, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0; the same LBA ranges appear repeatedly, consistent with host retries (dnr:0) while the path's ANA state is inaccessible ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.017 [2024-10-17 19:32:22.429593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.017 [2024-10-17 19:32:22.429612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429787] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.429978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.429987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.430849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002c 
p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.430879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.430904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.430928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.430952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.430976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.430992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.018 [2024-10-17 19:32:22.431197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.018 [2024-10-17 19:32:22.431213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129360 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.431732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.431741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.019 [2024-10-17 19:32:22.432557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.019 [2024-10-17 19:32:22.432572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.019 [2024-10-17 19:32:22.432581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.020 [2024-10-17 19:32:22.432957] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.432981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.432996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 
[2024-10-17 19:32:22.433192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.020 [2024-10-17 19:32:22.433375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.020 [2024-10-17 19:32:22.433397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.020 [2024-10-17 19:32:22.433420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.020 [2024-10-17 19:32:22.433443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.020 [2024-10-17 19:32:22.433458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.433465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.433480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.433488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.434036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.434061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.434084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.434107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.434130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.021 [2024-10-17 19:32:22.434167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.021 [2024-10-17 19:32:22.434181] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.021 [2024-10-17 19:32:22.434190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: READ and WRITE commands on qid:1 (nsid:1, lba 128536-129552, len:8, SGL DATA BLOCK / SGL TRANSPORT DATA BLOCK), each completed with the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) status, p:0 m:0 dnr:0, sqhd cycling through 0000-007f ...]
00:25:15.027 [2024-10-17 19:32:22.441017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129512 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.027 [2024-10-17 19:32:22.441026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.027 [2024-10-17 19:32:22.441049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.027 [2024-10-17 19:32:22.441068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.027 [2024-10-17 19:32:22.441088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.027 [2024-10-17 19:32:22.441109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:128600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 
p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.027 [2024-10-17 19:32:22.441452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:15.027 [2024-10-17 19:32:22.441656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.027 [2024-10-17 19:32:22.441664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.028 [2024-10-17 19:32:22.441833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 
19:32:22.441854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.441876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.441890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.441898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128880 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442868] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.442984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.028 [2024-10-17 19:32:22.442991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:15.028 [2024-10-17 19:32:22.443004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 
19:32:22.443089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:15.029 [2024-10-17 19:32:22.443703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.029 [2024-10-17 19:32:22.443725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:15.029 [2024-10-17 19:32:22.443740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.443749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.443763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.443770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.443782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.443790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.443802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.443809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.443822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.443831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.443846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.443855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 
nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:15.030 [2024-10-17 19:32:22.444736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.030 [2024-10-17 19:32:22.444744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:25:15.030 [2024-10-17 19:32:22.444757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.030 [2024-10-17 19:32:22.444765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:25:15.030 [2024-10-17 19:32:22.444863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.030 [2024-10-17 19:32:22.444871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:15.031 [... several hundred similar *NOTICE* command/completion pairs elided: between 19:32:22.444757 and 19:32:22.454124 every queued READ (lba:128536-128792, SGL TRANSPORT DATA BLOCK) and WRITE (lba:128800-129552, SGL DATA BLOCK OFFSET) on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 ...]
00:25:15.036 [2024-10-17 19:32:22.454117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:15.036 [2024-10-17 19:32:22.454124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:25:15.036 [2024-10-17 19:32:22.454141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:15.036 [2024-10-17 19:32:22.454603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:15.036 [2024-10-17 19:32:22.454611] nvme_qpair.c: 
[2024-10-17 19:32:22.454611 to 19:32:22.454891: final pairs of the run elided. WRITE lba 129264-129320 completed ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 003e-0045]
00:25:15.037 11318.38 IOPS, 44.21 MiB/s [2024-10-17T17:32:38.821Z]
10509.93 IOPS, 41.05 MiB/s [2024-10-17T17:32:38.821Z]
9809.27 IOPS, 38.32 MiB/s [2024-10-17T17:32:38.821Z]
9293.06 IOPS, 36.30 MiB/s [2024-10-17T17:32:38.821Z]
9415.53 IOPS, 36.78 MiB/s [2024-10-17T17:32:38.821Z]
9523.17 IOPS, 37.20 MiB/s [2024-10-17T17:32:38.821Z]
9708.58 IOPS, 37.92 MiB/s [2024-10-17T17:32:38.821Z]
9893.80 IOPS, 38.65 MiB/s [2024-10-17T17:32:38.821Z]
10052.38 IOPS, 39.27 MiB/s [2024-10-17T17:32:38.821Z]
10117.77 IOPS, 39.52 MiB/s [2024-10-17T17:32:38.821Z]
10176.78 IOPS, 39.75 MiB/s [2024-10-17T17:32:38.821Z]
10255.25 IOPS, 40.06 MiB/s [2024-10-17T17:32:38.821Z]
10377.04 IOPS, 40.54 MiB/s [2024-10-17T17:32:38.821Z]
10497.54 IOPS, 41.01 MiB/s [2024-10-17T17:32:38.821Z]
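Each pair in the run elided above is SPDK's queue-pair tracer at work: nvme_qpair.c:243 prints the submitted command (opcode, sqid/cid, nsid, lba) and nvme_qpair.c:474 prints its completion, here always the ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02), meaning the path serving queue 1 currently reports the namespace unreachable and multipath must retry the I/O elsewhere. To size such a failure window after a run, one can count the completion notices in a saved copy of this output; a minimal sketch, assuming the console was captured to a file (try.txt, the test's scratch file deleted by the cleanup below, stands in here purely as an illustrative capture target):

  # count I/Os that completed with the ANA INACCESSIBLE status (03/02)
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt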
00:25:15.037 [2024-10-17 19:32:35.999358 to 19:32:36.001542: ~65 repeated notice pairs elided. WRITE commands for lba 127328-127824 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ commands for lba 126816-127312 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all sqid:1 nsid:1 len:8, each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0016-0055, p:0 m:0 dnr:0, cid varying]
10568.48 IOPS, 41.28 MiB/s [2024-10-17T17:32:38.822Z]
10604.32 IOPS, 41.42 MiB/s [2024-10-17T17:32:38.822Z]
00:25:15.038 Received shutdown signal, test time was about 28.715232 seconds
00:25:15.038
00:25:15.038 Latency(us)
00:25:15.039 [2024-10-17T17:32:38.823Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s    TO/s   Average       min         max
00:25:15.039 [2024-10-17T17:32:38.823Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:15.039 [2024-10-17T17:32:38.823Z] Verification LBA range: start 0x0 length 0x4000
00:25:15.039 [2024-10-17T17:32:38.823Z]   Nvme0n1  :      28.71   10624.18     41.50      0.00    0.00  12028.82    795.79  3083812.08
00:25:15.039 [2024-10-17T17:32:38.823Z] ===================================================================================================================
00:25:15.039 [2024-10-17T17:32:38.823Z]   Total    :      28.71   10624.18     41.50      0.00    0.00  12028.82    795.79  3083812.08
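As a sanity check on the summary, the MiB/s column is the IOPS column scaled by the 4096-byte I/O size: 10624.18 IOPS x 4096 B is about 43.5 MB/s, and dividing by 1048576 (equivalently, IOPS/256 for 4 KiB I/Os) reproduces the reported 41.50 MiB/s.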
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:15.039 rmmod nvme_tcp
00:25:15.039 rmmod nvme_fabrics
00:25:15.039 rmmod nvme_keyring
00:25:15.298 19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 2206983 ']'
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 2206983
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2206983 ']'
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2206983
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2206983
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2206983'
killing process with pid 2206983
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2206983
19:32:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2206983
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@522 -- # nvmf_tcp_fini
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-save
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@789 -- # iptables-restore
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
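The iptr helper traced above tears down SPDK's firewall additions by round-tripping the ruleset through a filter. Condensed from the three steps at nvmf/common.sh@789, the pipeline is:

  # re-load the current ruleset minus every rule tagged SPDK_NVMF
  iptables-save | grep -v SPDK_NVMF | iptables-restore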
19:32:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:17.836 19:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:17.836
00:25:17.836 real 0m40.588s
00:25:17.836 user 1m49.619s
00:25:17.836 sys 0m11.749s
19:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
19:32:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:25:17.836 ************************************
00:25:17.836 END TEST nvmf_host_multipath_status
00:25:17.836 ************************************
19:32:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
19:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
19:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
19:32:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:17.836 ************************************
00:25:17.836 START TEST nvmf_discovery_remove_ifc
00:25:17.836 ************************************
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:25:17.836 * Looking for test storage...
00:25:17.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
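The field-by-field walk above is scripts/common.sh's generic comparator deciding that lcov 1.15 predates 2. Condensed from the trace (a sketch of the idiom, not the verbatim library source), the less-than test is roughly:

  # split versions on '.', '-' or ':' and compare numerically, field by field
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly greater
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly less: 1 < 2 here
      done
      return 1                                            # equal
  }
  lt 1.15 2 && echo 'lcov predates 2.x'                   # matches the return 0 traced above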
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[common/autotest_common.sh@1704-1705: exports LCOV_OPTS and LCOV='lcov ...'; four near-identical multi-line option blocks elided, each listing --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
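Note how the host identity above is derived rather than hard-coded: nvmf/common.sh@17 asks nvme-cli for a fresh UUID-based NQN and then reuses its UUID part as the host ID. A standalone sketch of the same idiom (the suffix-stripping is an assumption about one way to recover the UUID, not necessarily the library's own extraction):

  # generate nqn.2014-08.org.nvmexpress:uuid:<uuid> and recover the bare <uuid>
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}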
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated from earlier sourcing, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[prepend of the same value, elided]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[prepend of the same value, elided]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[remainder elided]
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:17.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:17.837 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:17.838 19:32:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.408 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.408 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:24.408 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:24.408 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:24.408 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:24.408 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:24.408 19:32:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:24.409 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:24.409 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:24.409 Found net devices under 0000:86:00.0: cvl_0_0 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 
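The loop traced above is common.sh's NIC discovery: it seeds arrays of supported device IDs (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox mlx list), intersects them with the PCI bus, and resolves each hit to its kernel net interface through /sys/bus/pci/devices/$pci/net/. A self-contained sketch of the same idea follows; the direct sysfs scan here replaces the script's prebuilt pci_bus_cache lookup and is an assumption of this sketch, not the script's actual mechanism.

    #!/usr/bin/env bash
    # Hypothetical standalone rewrite: locate Intel E810 ports and their net devices.
    intel=0x8086
    e810=(0x1592 0x159b)                      # device IDs the suite treats as E810
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        [[ $vendor == "$intel" ]] || continue
        for id in "${e810[@]}"; do
            [[ $device == "$id" ]] || continue
            echo "Found ${dev##*/} ($vendor - $device)"
            for net in "$dev"/net/*; do       # PCI function -> net interface, as in the log
                [[ -e $net ]] && echo "Found net devices under ${dev##*/}: ${net##*/}"
            done
        done
    done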
00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:24.409 Found net devices under 0000:86:00.1: cvl_0_1 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # is_hw=yes 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.409 19:32:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:25:24.409 00:25:24.409 --- 10.0.0.2 ping statistics --- 00:25:24.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.409 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:25:24.409 00:25:24.409 --- 10.0.0.1 ping statistics --- 00:25:24.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.409 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # return 0 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:24.409 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # nvmfpid=2215783 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # waitforlisten 2215783 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
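nvmf_tcp_init, traced above, wires the two E810 ports into a self-contained topology: the target-side port cvl_0_0 (10.0.0.2) moves into a private network namespace while the initiator-side port cvl_0_1 (10.0.0.1) stays in the root namespace, so the NVMe/TCP traffic really crosses the physical link on a single host. Condensed from the exact commands in the trace:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Tagged with an SPDK_NVMF comment so cleanup can later strip exactly this rule:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1               # namespace -> root ns

With both pings answering, modprobe nvme-tcp loads the kernel initiator support and the target application can be started inside the namespace.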
00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2215783 ']' 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 [2024-10-17 19:32:47.370408] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:25:24.410 [2024-10-17 19:32:47.370449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.410 [2024-10-17 19:32:47.450683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.410 [2024-10-17 19:32:47.491051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.410 [2024-10-17 19:32:47.491087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.410 [2024-10-17 19:32:47.491095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.410 [2024-10-17 19:32:47.491101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.410 [2024-10-17 19:32:47.491107] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
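nvmfappstart backgrounds nvmf_tgt inside the namespace (note the ip netns exec prefix on the command above), records nvmfpid, and parks in waitforlisten until the application answers on /var/tmp/spdk.sock. The loop body itself is not traced here, so the following is only a minimal sketch of the pattern implied by the locals that are traced (rpc_addr, max_retries=100); rpc.py rpc_get_methods stands in for whatever liveness probe the real helper uses:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1      # process died during startup
            # Any RPC that succeeds proves the app is up and listening.
            rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }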
00:25:24.410 [2024-10-17 19:32:47.491690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 [2024-10-17 19:32:47.635859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.410 [2024-10-17 19:32:47.644031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:24.410 null0 00:25:24.410 [2024-10-17 19:32:47.676027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2215948 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2215948 /tmp/host.sock 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2215948 ']' 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:24.410 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 [2024-10-17 19:32:47.742453] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:25:24.410 [2024-10-17 19:32:47.742494] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2215948 ] 00:25:24.410 [2024-10-17 19:32:47.815394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.410 [2024-10-17 19:32:47.857805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.410 19:32:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.347 [2024-10-17 19:32:49.032680] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:25.347 [2024-10-17 19:32:49.032702] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:25.347 [2024-10-17 19:32:49.032717] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.347 [2024-10-17 19:32:49.118971] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:25.605 [2024-10-17 19:32:49.337049] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:25.605 [2024-10-17 19:32:49.337093] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:25.605 [2024-10-17 19:32:49.337113] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:25.605 [2024-10-17 19:32:49.337125] bdev_nvme.c:6972:discovery_attach_controller_done: 
*INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:25.605 [2024-10-17 19:32:49.337142] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.605 [2024-10-17 19:32:49.341695] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18eea50 was disconnected and freed. delete nvme_qpair. 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:25.605 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:25.864 19:32:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:26.802 
19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:26.802 19:32:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:28.180 19:32:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:29.114 19:32:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.049 19:32:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:30.049 19:32:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:30.985 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.244 [2024-10-17 19:32:54.778674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:31.244 [2024-10-17 19:32:54.778713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.244 [2024-10-17 19:32:54.778724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.244 [2024-10-17 19:32:54.778735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.244 [2024-10-17 19:32:54.778742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.244 [2024-10-17 19:32:54.778750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.244 [2024-10-17 19:32:54.778756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.244 [2024-10-17 19:32:54.778763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.244 [2024-10-17 19:32:54.778769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.244 [2024-10-17 19:32:54.778777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:31.244 [2024-10-17 19:32:54.778783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.244 [2024-10-17 19:32:54.778790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb2e0 is same with the state(6) to be set 00:25:31.244 [2024-10-17 19:32:54.788697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb2e0 (9): Bad file descriptor 00:25:31.244 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:31.244 19:32:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:31.244 [2024-10-17 19:32:54.798734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:32.180 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.180 [2024-10-17 19:32:55.837729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:32.180 [2024-10-17 19:32:55.837812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb2e0 with addr=10.0.0.2, port=4420 00:25:32.180 [2024-10-17 19:32:55.837847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb2e0 is same with the state(6) to be set 00:25:32.180 [2024-10-17 19:32:55.837902] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb2e0 (9): Bad file descriptor 00:25:32.180 [2024-10-17 19:32:55.838855] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:32.180 [2024-10-17 19:32:55.838922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:32.180 [2024-10-17 19:32:55.838946] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:32.180 [2024-10-17 19:32:55.838970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:32.180 [2024-10-17 19:32:55.839031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
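The rpc_cmd/jq/sort/xargs blocks that repeat above, one second apart, are the test's wait loop: after ip addr del 10.0.0.2/24 and ip link set cvl_0_0 down, the host keeps listing bdevs until nvme0n1 drops out, which only happens once the reconnect attempts above exhaust the controller-loss timeout. Reassembled from the traced helpers (rpc_cmd is the suite's wrapper around scripts/rpc.py; any retry cap in the real helper is not visible in this trace and is omitted):

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # discovery_remove_ifc.sh@33-34: poll once per second until the list matches.
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

    wait_for_bdev ''      # @79: wait for nvme0n1 to disappear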
00:25:32.181 [2024-10-17 19:32:55.839057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:32.181 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.181 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:32.181 19:32:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.118 [2024-10-17 19:32:56.841550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:33.118 [2024-10-17 19:32:56.841572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:33.118 [2024-10-17 19:32:56.841579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:33.118 [2024-10-17 19:32:56.841586] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:33.118 [2024-10-17 19:32:56.841621] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:33.118 [2024-10-17 19:32:56.841639] bdev_nvme.c:6904:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:33.118 [2024-10-17 19:32:56.841662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.118 [2024-10-17 19:32:56.841672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.118 [2024-10-17 19:32:56.841682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.118 [2024-10-17 19:32:56.841693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.118 [2024-10-17 19:32:56.841700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.118 [2024-10-17 19:32:56.841707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.118 [2024-10-17 19:32:56.841714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.118 [2024-10-17 19:32:56.841721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.118 [2024-10-17 19:32:56.841728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.118 [2024-10-17 19:32:56.841734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.118 [2024-10-17 19:32:56.841741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
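This failure cascade is governed entirely by the knobs passed when the discovery service was attached: with the link down, every reconnect (one per --reconnect-delay-sec) dies with errno 110 until --ctrlr-loss-timeout-sec expires, the controller is failed for good, and the discovery entry for the subsystem is removed. For reference, the attach call from earlier in this run:

    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach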
00:25:33.118 [2024-10-17 19:32:56.842160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18ba9c0 (9): Bad file descriptor 00:25:33.118 [2024-10-17 19:32:56.843171] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:33.118 [2024-10-17 19:32:56.843182] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.118 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.377 19:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.377 19:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:33.377 19:32:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.377 19:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.377 19:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:33.377 19:32:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.312 19:32:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:34.312 19:32:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.246 [2024-10-17 19:32:58.898087] bdev_nvme.c:7153:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:35.246 [2024-10-17 19:32:58.898105] bdev_nvme.c:7239:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:35.246 [2024-10-17 19:32:58.898117] bdev_nvme.c:7116:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:35.246 [2024-10-17 19:32:58.984385] bdev_nvme.c:7082:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:35.511 [2024-10-17 19:32:59.040447] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:35.511 [2024-10-17 19:32:59.040483] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:35.511 [2024-10-17 19:32:59.040500] bdev_nvme.c:7949:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:35.511 [2024-10-17 19:32:59.040513] bdev_nvme.c:6972:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:35.511 [2024-10-17 19:32:59.040520] bdev_nvme.c:6931:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:35.511 [2024-10-17 19:32:59.046332] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18c6a30 was disconnected and freed. delete nvme_qpair. 
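Recovery is the mirror image. The address goes back on inside the namespace, the link comes up, and the still-listening discovery service on 10.0.0.2:8009 attaches the subsystem again, now as nvme1 (the nvme0 name was retired with the failed controller), so the test polls for nvme1n1 instead:

    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1     # same poll as before, new controller name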
00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2215948 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2215948 ']' 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2215948 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2215948 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2215948' 00:25:35.511 killing process with pid 2215948 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2215948 00:25:35.511 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2215948 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.772 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.772 rmmod nvme_tcp 00:25:35.772 rmmod nvme_fabrics 00:25:35.773 rmmod nvme_keyring 00:25:35.773 19:32:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@515 -- # '[' -n 2215783 ']' 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # killprocess 2215783 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2215783 ']' 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2215783 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2215783 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2215783' 00:25:35.773 killing process with pid 2215783 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2215783 00:25:35.773 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2215783 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-save 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@789 -- # iptables-restore 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:36.032 19:32:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.937 19:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:37.937 00:25:37.937 real 0m20.547s 00:25:37.937 user 0m24.766s 00:25:37.937 sys 0m5.882s 00:25:37.937 19:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:25:37.937 19:33:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.937 ************************************ 00:25:37.937 END TEST nvmf_discovery_remove_ifc 00:25:37.937 ************************************ 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.197 ************************************ 00:25:38.197 START TEST nvmf_identify_kernel_target 00:25:38.197 ************************************ 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:38.197 * Looking for test storage... 00:25:38.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:38.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.197 --rc genhtml_branch_coverage=1 00:25:38.197 --rc genhtml_function_coverage=1 00:25:38.197 --rc genhtml_legend=1 00:25:38.197 --rc geninfo_all_blocks=1 00:25:38.197 --rc geninfo_unexecuted_blocks=1 00:25:38.197 00:25:38.197 ' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:38.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.197 --rc genhtml_branch_coverage=1 00:25:38.197 --rc genhtml_function_coverage=1 00:25:38.197 --rc genhtml_legend=1 00:25:38.197 --rc geninfo_all_blocks=1 00:25:38.197 --rc geninfo_unexecuted_blocks=1 00:25:38.197 00:25:38.197 ' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:38.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.197 --rc genhtml_branch_coverage=1 00:25:38.197 --rc genhtml_function_coverage=1 00:25:38.197 --rc genhtml_legend=1 00:25:38.197 --rc geninfo_all_blocks=1 00:25:38.197 --rc geninfo_unexecuted_blocks=1 00:25:38.197 00:25:38.197 ' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:38.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.197 --rc genhtml_branch_coverage=1 00:25:38.197 --rc genhtml_function_coverage=1 00:25:38.197 --rc genhtml_legend=1 00:25:38.197 --rc geninfo_all_blocks=1 00:25:38.197 --rc geninfo_unexecuted_blocks=1 00:25:38.197 00:25:38.197 ' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.197 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:38.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:38.198 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.457 19:33:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:45.047 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:45.048 19:33:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:45.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:45.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:45.048 Found net devices under 0000:86:00.0: cvl_0_0 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:45.048 Found net devices under 0000:86:00.1: cvl_0_1 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:45.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:25:45.048 00:25:45.048 --- 10.0.0.2 ping statistics --- 00:25:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.048 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:25:45.048 00:25:45.048 --- 10.0.0.1 ping statistics --- 00:25:45.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.048 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:25:45.048 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:25:45.049 19:33:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]]
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:25:45.049 19:33:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:25:46.953 Waiting for block devices as requested
00:25:47.212 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:47.212 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:47.212 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:47.212 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:47.470 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:47.470 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:47.470 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:47.729 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:47.729 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:47.729 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:47.729 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:47.988 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:47.988 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:47.988 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:48.247 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:48.247 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:48.247 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]]
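The configure_kernel_target entries around this point drive the Linux kernel NVMe-oF target (nvmet) through configfs: the mkdir/echo/ln -s sequence just below creates a subsystem, backs it with a local NVMe namespace, and publishes it on a TCP port. Bash xtrace does not show where the bare echo commands redirect, so the attribute file names in this sketch are inferred from the standard nvmet configfs layout rather than taken from the log; the NQN, device, address, and port do come from the log, and a root shell with a free /dev/nvme0n1 is assumed:

  modprobe nvmet nvmet_tcp   # kernel target core + TCP transport (the teardown later removes both)
  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$subsys"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"   # serial string (log entry @691)
  echo 1 > "$subsys/attr_allow_any_host"                          # accept any host NQN (log entry @693)
  mkdir "$subsys/namespaces/1"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # backing block device (log entry @694)
  echo 1 > "$subsys/namespaces/1/enable"                          # bring the namespace online (log entry @695)
  mkdir "$port"
  echo 10.0.0.1 > "$port/addr_traddr"                             # listen address (log entry @697)
  echo tcp > "$port/addr_trtype"                                  # transport type (log entry @698)
  echo 4420 > "$port/addr_trsvcid"                                # NVMe/TCP service id (log entry @699)
  echo ipv4 > "$port/addr_adrfam"                                 # address family (log entry @700)
  ln -s "$subsys" "$port/subsystems/"                             # expose the subsystem on the port (log entry @703)

Once the symlink exists the kernel target answers on 10.0.0.1:4420, which the nvme discover call below confirms by returning two discovery records (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn).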
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:25:48.507 No valid GPT data, bailing
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:25:48.507 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo tcp
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:25:48.508
00:25:48.508 Discovery Log Number of Records 2, Generation counter 2
00:25:48.508 =====Discovery Log Entry 0======
00:25:48.508 trtype: tcp
00:25:48.508 adrfam: ipv4
00:25:48.508 subtype: current discovery subsystem
00:25:48.508 treq: not specified, sq flow control disable
supported 00:25:48.508 portid: 1 00:25:48.508 trsvcid: 4420 00:25:48.508 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:48.508 traddr: 10.0.0.1 00:25:48.508 eflags: none 00:25:48.508 sectype: none 00:25:48.508 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:48.508 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:48.768 ===================================================== 00:25:48.768 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:48.768 ===================================================== 00:25:48.768 Controller Capabilities/Features 00:25:48.768 ================================ 00:25:48.768 Vendor ID: 0000 00:25:48.768 Subsystem Vendor ID: 0000 00:25:48.768 Serial Number: 895efc97b5f01ee51255 00:25:48.768 Model Number: Linux 00:25:48.768 Firmware Version: 6.8.9-20 00:25:48.768 Recommended Arb Burst: 0 00:25:48.768 IEEE OUI Identifier: 00 00 00 00:25:48.768 Multi-path I/O 00:25:48.768 May have multiple subsystem ports: No 00:25:48.768 May have multiple controllers: No 00:25:48.768 Associated with SR-IOV VF: No 00:25:48.768 Max Data Transfer Size: Unlimited 00:25:48.768 Max Number of Namespaces: 0 00:25:48.768 Max Number of I/O Queues: 1024 00:25:48.768 NVMe Specification Version (VS): 1.3 00:25:48.768 NVMe Specification Version (Identify): 1.3 00:25:48.768 Maximum Queue Entries: 1024 00:25:48.768 Contiguous Queues Required: No 00:25:48.768 Arbitration Mechanisms Supported 00:25:48.768 Weighted Round Robin: Not Supported 00:25:48.768 Vendor Specific: Not Supported 00:25:48.768 Reset Timeout: 7500 ms 00:25:48.768 Doorbell Stride: 4 bytes 00:25:48.768 NVM Subsystem Reset: Not Supported 00:25:48.768 Command Sets Supported 00:25:48.768 NVM Command Set: Supported 00:25:48.768 Boot Partition: Not Supported 00:25:48.768 Memory Page Size Minimum: 4096 bytes 00:25:48.768 Memory Page Size Maximum: 4096 bytes 00:25:48.768 Persistent Memory Region: Not Supported 00:25:48.768 Optional Asynchronous Events Supported 00:25:48.768 Namespace Attribute Notices: Not Supported 00:25:48.768 Firmware Activation Notices: Not Supported 00:25:48.768 ANA Change Notices: Not Supported 00:25:48.768 PLE Aggregate Log Change Notices: Not Supported 00:25:48.768 LBA Status Info Alert Notices: Not Supported 00:25:48.768 EGE Aggregate Log Change Notices: Not Supported 00:25:48.768 Normal NVM Subsystem Shutdown event: Not Supported 00:25:48.768 Zone Descriptor Change Notices: Not Supported 00:25:48.768 Discovery Log Change Notices: Supported 00:25:48.768 Controller Attributes 00:25:48.768 128-bit Host Identifier: Not Supported 00:25:48.768 Non-Operational Permissive Mode: Not Supported 00:25:48.768 NVM Sets: Not Supported 00:25:48.768 Read Recovery Levels: Not Supported 00:25:48.768 Endurance Groups: Not Supported 00:25:48.768 Predictable Latency Mode: Not Supported 00:25:48.768 Traffic Based Keep ALive: Not Supported 00:25:48.768 Namespace Granularity: Not Supported 00:25:48.768 SQ Associations: Not Supported 00:25:48.769 UUID List: Not Supported 00:25:48.769 Multi-Domain Subsystem: Not Supported 00:25:48.769 Fixed Capacity Management: Not Supported 00:25:48.769 Variable Capacity Management: Not Supported 00:25:48.769 Delete Endurance Group: Not Supported 00:25:48.769 Delete NVM Set: Not Supported 00:25:48.769 Extended LBA Formats Supported: Not Supported 00:25:48.769 Flexible Data Placement 
Supported: Not Supported 00:25:48.769 00:25:48.769 Controller Memory Buffer Support 00:25:48.769 ================================ 00:25:48.769 Supported: No 00:25:48.769 00:25:48.769 Persistent Memory Region Support 00:25:48.769 ================================ 00:25:48.769 Supported: No 00:25:48.769 00:25:48.769 Admin Command Set Attributes 00:25:48.769 ============================ 00:25:48.769 Security Send/Receive: Not Supported 00:25:48.769 Format NVM: Not Supported 00:25:48.769 Firmware Activate/Download: Not Supported 00:25:48.769 Namespace Management: Not Supported 00:25:48.769 Device Self-Test: Not Supported 00:25:48.769 Directives: Not Supported 00:25:48.769 NVMe-MI: Not Supported 00:25:48.769 Virtualization Management: Not Supported 00:25:48.769 Doorbell Buffer Config: Not Supported 00:25:48.769 Get LBA Status Capability: Not Supported 00:25:48.769 Command & Feature Lockdown Capability: Not Supported 00:25:48.769 Abort Command Limit: 1 00:25:48.769 Async Event Request Limit: 1 00:25:48.769 Number of Firmware Slots: N/A 00:25:48.769 Firmware Slot 1 Read-Only: N/A 00:25:48.769 Firmware Activation Without Reset: N/A 00:25:48.769 Multiple Update Detection Support: N/A 00:25:48.769 Firmware Update Granularity: No Information Provided 00:25:48.769 Per-Namespace SMART Log: No 00:25:48.769 Asymmetric Namespace Access Log Page: Not Supported 00:25:48.769 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:48.769 Command Effects Log Page: Not Supported 00:25:48.769 Get Log Page Extended Data: Supported 00:25:48.769 Telemetry Log Pages: Not Supported 00:25:48.769 Persistent Event Log Pages: Not Supported 00:25:48.769 Supported Log Pages Log Page: May Support 00:25:48.769 Commands Supported & Effects Log Page: Not Supported 00:25:48.769 Feature Identifiers & Effects Log Page:May Support 00:25:48.769 NVMe-MI Commands & Effects Log Page: May Support 00:25:48.769 Data Area 4 for Telemetry Log: Not Supported 00:25:48.769 Error Log Page Entries Supported: 1 00:25:48.769 Keep Alive: Not Supported 00:25:48.769 00:25:48.769 NVM Command Set Attributes 00:25:48.769 ========================== 00:25:48.769 Submission Queue Entry Size 00:25:48.769 Max: 1 00:25:48.769 Min: 1 00:25:48.769 Completion Queue Entry Size 00:25:48.769 Max: 1 00:25:48.769 Min: 1 00:25:48.769 Number of Namespaces: 0 00:25:48.769 Compare Command: Not Supported 00:25:48.769 Write Uncorrectable Command: Not Supported 00:25:48.769 Dataset Management Command: Not Supported 00:25:48.769 Write Zeroes Command: Not Supported 00:25:48.769 Set Features Save Field: Not Supported 00:25:48.769 Reservations: Not Supported 00:25:48.769 Timestamp: Not Supported 00:25:48.769 Copy: Not Supported 00:25:48.769 Volatile Write Cache: Not Present 00:25:48.769 Atomic Write Unit (Normal): 1 00:25:48.769 Atomic Write Unit (PFail): 1 00:25:48.769 Atomic Compare & Write Unit: 1 00:25:48.769 Fused Compare & Write: Not Supported 00:25:48.769 Scatter-Gather List 00:25:48.769 SGL Command Set: Supported 00:25:48.769 SGL Keyed: Not Supported 00:25:48.769 SGL Bit Bucket Descriptor: Not Supported 00:25:48.769 SGL Metadata Pointer: Not Supported 00:25:48.769 Oversized SGL: Not Supported 00:25:48.769 SGL Metadata Address: Not Supported 00:25:48.769 SGL Offset: Supported 00:25:48.769 Transport SGL Data Block: Not Supported 00:25:48.769 Replay Protected Memory Block: Not Supported 00:25:48.769 00:25:48.769 Firmware Slot Information 00:25:48.769 ========================= 00:25:48.769 Active slot: 0 00:25:48.769 00:25:48.769 00:25:48.769 Error Log 00:25:48.769 
========= 00:25:48.769 00:25:48.769 Active Namespaces 00:25:48.769 ================= 00:25:48.769 Discovery Log Page 00:25:48.769 ================== 00:25:48.769 Generation Counter: 2 00:25:48.769 Number of Records: 2 00:25:48.769 Record Format: 0 00:25:48.769 00:25:48.769 Discovery Log Entry 0 00:25:48.769 ---------------------- 00:25:48.769 Transport Type: 3 (TCP) 00:25:48.769 Address Family: 1 (IPv4) 00:25:48.769 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:48.769 Entry Flags: 00:25:48.769 Duplicate Returned Information: 0 00:25:48.769 Explicit Persistent Connection Support for Discovery: 0 00:25:48.769 Transport Requirements: 00:25:48.769 Secure Channel: Not Specified 00:25:48.769 Port ID: 1 (0x0001) 00:25:48.769 Controller ID: 65535 (0xffff) 00:25:48.769 Admin Max SQ Size: 32 00:25:48.769 Transport Service Identifier: 4420 00:25:48.769 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:48.769 Transport Address: 10.0.0.1 00:25:48.769 Discovery Log Entry 1 00:25:48.769 ---------------------- 00:25:48.769 Transport Type: 3 (TCP) 00:25:48.769 Address Family: 1 (IPv4) 00:25:48.769 Subsystem Type: 2 (NVM Subsystem) 00:25:48.769 Entry Flags: 00:25:48.769 Duplicate Returned Information: 0 00:25:48.769 Explicit Persistent Connection Support for Discovery: 0 00:25:48.769 Transport Requirements: 00:25:48.769 Secure Channel: Not Specified 00:25:48.769 Port ID: 1 (0x0001) 00:25:48.769 Controller ID: 65535 (0xffff) 00:25:48.769 Admin Max SQ Size: 32 00:25:48.769 Transport Service Identifier: 4420 00:25:48.769 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:48.769 Transport Address: 10.0.0.1 00:25:48.769 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:48.769 get_feature(0x01) failed 00:25:48.769 get_feature(0x02) failed 00:25:48.769 get_feature(0x04) failed 00:25:48.769 ===================================================== 00:25:48.769 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:48.769 ===================================================== 00:25:48.769 Controller Capabilities/Features 00:25:48.769 ================================ 00:25:48.769 Vendor ID: 0000 00:25:48.769 Subsystem Vendor ID: 0000 00:25:48.769 Serial Number: aba85b3ed90cf673d5fb 00:25:48.769 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:48.769 Firmware Version: 6.8.9-20 00:25:48.769 Recommended Arb Burst: 6 00:25:48.769 IEEE OUI Identifier: 00 00 00 00:25:48.769 Multi-path I/O 00:25:48.769 May have multiple subsystem ports: Yes 00:25:48.769 May have multiple controllers: Yes 00:25:48.769 Associated with SR-IOV VF: No 00:25:48.769 Max Data Transfer Size: Unlimited 00:25:48.769 Max Number of Namespaces: 1024 00:25:48.769 Max Number of I/O Queues: 128 00:25:48.769 NVMe Specification Version (VS): 1.3 00:25:48.769 NVMe Specification Version (Identify): 1.3 00:25:48.769 Maximum Queue Entries: 1024 00:25:48.769 Contiguous Queues Required: No 00:25:48.769 Arbitration Mechanisms Supported 00:25:48.769 Weighted Round Robin: Not Supported 00:25:48.769 Vendor Specific: Not Supported 00:25:48.769 Reset Timeout: 7500 ms 00:25:48.769 Doorbell Stride: 4 bytes 00:25:48.769 NVM Subsystem Reset: Not Supported 00:25:48.769 Command Sets Supported 00:25:48.769 NVM Command Set: Supported 00:25:48.769 Boot Partition: Not Supported 00:25:48.769 
Memory Page Size Minimum: 4096 bytes 00:25:48.769 Memory Page Size Maximum: 4096 bytes 00:25:48.769 Persistent Memory Region: Not Supported 00:25:48.769 Optional Asynchronous Events Supported 00:25:48.769 Namespace Attribute Notices: Supported 00:25:48.769 Firmware Activation Notices: Not Supported 00:25:48.769 ANA Change Notices: Supported 00:25:48.769 PLE Aggregate Log Change Notices: Not Supported 00:25:48.769 LBA Status Info Alert Notices: Not Supported 00:25:48.769 EGE Aggregate Log Change Notices: Not Supported 00:25:48.769 Normal NVM Subsystem Shutdown event: Not Supported 00:25:48.769 Zone Descriptor Change Notices: Not Supported 00:25:48.769 Discovery Log Change Notices: Not Supported 00:25:48.769 Controller Attributes 00:25:48.769 128-bit Host Identifier: Supported 00:25:48.769 Non-Operational Permissive Mode: Not Supported 00:25:48.769 NVM Sets: Not Supported 00:25:48.769 Read Recovery Levels: Not Supported 00:25:48.769 Endurance Groups: Not Supported 00:25:48.769 Predictable Latency Mode: Not Supported 00:25:48.769 Traffic Based Keep ALive: Supported 00:25:48.769 Namespace Granularity: Not Supported 00:25:48.769 SQ Associations: Not Supported 00:25:48.769 UUID List: Not Supported 00:25:48.769 Multi-Domain Subsystem: Not Supported 00:25:48.769 Fixed Capacity Management: Not Supported 00:25:48.769 Variable Capacity Management: Not Supported 00:25:48.769 Delete Endurance Group: Not Supported 00:25:48.769 Delete NVM Set: Not Supported 00:25:48.769 Extended LBA Formats Supported: Not Supported 00:25:48.769 Flexible Data Placement Supported: Not Supported 00:25:48.769 00:25:48.769 Controller Memory Buffer Support 00:25:48.769 ================================ 00:25:48.769 Supported: No 00:25:48.769 00:25:48.769 Persistent Memory Region Support 00:25:48.769 ================================ 00:25:48.769 Supported: No 00:25:48.769 00:25:48.770 Admin Command Set Attributes 00:25:48.770 ============================ 00:25:48.770 Security Send/Receive: Not Supported 00:25:48.770 Format NVM: Not Supported 00:25:48.770 Firmware Activate/Download: Not Supported 00:25:48.770 Namespace Management: Not Supported 00:25:48.770 Device Self-Test: Not Supported 00:25:48.770 Directives: Not Supported 00:25:48.770 NVMe-MI: Not Supported 00:25:48.770 Virtualization Management: Not Supported 00:25:48.770 Doorbell Buffer Config: Not Supported 00:25:48.770 Get LBA Status Capability: Not Supported 00:25:48.770 Command & Feature Lockdown Capability: Not Supported 00:25:48.770 Abort Command Limit: 4 00:25:48.770 Async Event Request Limit: 4 00:25:48.770 Number of Firmware Slots: N/A 00:25:48.770 Firmware Slot 1 Read-Only: N/A 00:25:48.770 Firmware Activation Without Reset: N/A 00:25:48.770 Multiple Update Detection Support: N/A 00:25:48.770 Firmware Update Granularity: No Information Provided 00:25:48.770 Per-Namespace SMART Log: Yes 00:25:48.770 Asymmetric Namespace Access Log Page: Supported 00:25:48.770 ANA Transition Time : 10 sec 00:25:48.770 00:25:48.770 Asymmetric Namespace Access Capabilities 00:25:48.770 ANA Optimized State : Supported 00:25:48.770 ANA Non-Optimized State : Supported 00:25:48.770 ANA Inaccessible State : Supported 00:25:48.770 ANA Persistent Loss State : Supported 00:25:48.770 ANA Change State : Supported 00:25:48.770 ANAGRPID is not changed : No 00:25:48.770 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:48.770 00:25:48.770 ANA Group Identifier Maximum : 128 00:25:48.770 Number of ANA Group Identifiers : 128 00:25:48.770 Max Number of Allowed Namespaces : 1024 00:25:48.770 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:48.770 Command Effects Log Page: Supported 00:25:48.770 Get Log Page Extended Data: Supported 00:25:48.770 Telemetry Log Pages: Not Supported 00:25:48.770 Persistent Event Log Pages: Not Supported 00:25:48.770 Supported Log Pages Log Page: May Support 00:25:48.770 Commands Supported & Effects Log Page: Not Supported 00:25:48.770 Feature Identifiers & Effects Log Page:May Support 00:25:48.770 NVMe-MI Commands & Effects Log Page: May Support 00:25:48.770 Data Area 4 for Telemetry Log: Not Supported 00:25:48.770 Error Log Page Entries Supported: 128 00:25:48.770 Keep Alive: Supported 00:25:48.770 Keep Alive Granularity: 1000 ms 00:25:48.770 00:25:48.770 NVM Command Set Attributes 00:25:48.770 ========================== 00:25:48.770 Submission Queue Entry Size 00:25:48.770 Max: 64 00:25:48.770 Min: 64 00:25:48.770 Completion Queue Entry Size 00:25:48.770 Max: 16 00:25:48.770 Min: 16 00:25:48.770 Number of Namespaces: 1024 00:25:48.770 Compare Command: Not Supported 00:25:48.770 Write Uncorrectable Command: Not Supported 00:25:48.770 Dataset Management Command: Supported 00:25:48.770 Write Zeroes Command: Supported 00:25:48.770 Set Features Save Field: Not Supported 00:25:48.770 Reservations: Not Supported 00:25:48.770 Timestamp: Not Supported 00:25:48.770 Copy: Not Supported 00:25:48.770 Volatile Write Cache: Present 00:25:48.770 Atomic Write Unit (Normal): 1 00:25:48.770 Atomic Write Unit (PFail): 1 00:25:48.770 Atomic Compare & Write Unit: 1 00:25:48.770 Fused Compare & Write: Not Supported 00:25:48.770 Scatter-Gather List 00:25:48.770 SGL Command Set: Supported 00:25:48.770 SGL Keyed: Not Supported 00:25:48.770 SGL Bit Bucket Descriptor: Not Supported 00:25:48.770 SGL Metadata Pointer: Not Supported 00:25:48.770 Oversized SGL: Not Supported 00:25:48.770 SGL Metadata Address: Not Supported 00:25:48.770 SGL Offset: Supported 00:25:48.770 Transport SGL Data Block: Not Supported 00:25:48.770 Replay Protected Memory Block: Not Supported 00:25:48.770 00:25:48.770 Firmware Slot Information 00:25:48.770 ========================= 00:25:48.770 Active slot: 0 00:25:48.770 00:25:48.770 Asymmetric Namespace Access 00:25:48.770 =========================== 00:25:48.770 Change Count : 0 00:25:48.770 Number of ANA Group Descriptors : 1 00:25:48.770 ANA Group Descriptor : 0 00:25:48.770 ANA Group ID : 1 00:25:48.770 Number of NSID Values : 1 00:25:48.770 Change Count : 0 00:25:48.770 ANA State : 1 00:25:48.770 Namespace Identifier : 1 00:25:48.770 00:25:48.770 Commands Supported and Effects 00:25:48.770 ============================== 00:25:48.770 Admin Commands 00:25:48.770 -------------- 00:25:48.770 Get Log Page (02h): Supported 00:25:48.770 Identify (06h): Supported 00:25:48.770 Abort (08h): Supported 00:25:48.770 Set Features (09h): Supported 00:25:48.770 Get Features (0Ah): Supported 00:25:48.770 Asynchronous Event Request (0Ch): Supported 00:25:48.770 Keep Alive (18h): Supported 00:25:48.770 I/O Commands 00:25:48.770 ------------ 00:25:48.770 Flush (00h): Supported 00:25:48.770 Write (01h): Supported LBA-Change 00:25:48.770 Read (02h): Supported 00:25:48.770 Write Zeroes (08h): Supported LBA-Change 00:25:48.770 Dataset Management (09h): Supported 00:25:48.770 00:25:48.770 Error Log 00:25:48.770 ========= 00:25:48.770 Entry: 0 00:25:48.770 Error Count: 0x3 00:25:48.770 Submission Queue Id: 0x0 00:25:48.770 Command Id: 0x5 00:25:48.770 Phase Bit: 0 00:25:48.770 Status Code: 0x2 00:25:48.770 Status Code Type: 0x0 00:25:48.770 Do Not Retry: 1 00:25:48.770 
Error Location: 0x28 00:25:48.770 LBA: 0x0 00:25:48.770 Namespace: 0x0 00:25:48.770 Vendor Log Page: 0x0 00:25:48.770 ----------- 00:25:48.770 Entry: 1 00:25:48.770 Error Count: 0x2 00:25:48.770 Submission Queue Id: 0x0 00:25:48.770 Command Id: 0x5 00:25:48.770 Phase Bit: 0 00:25:48.770 Status Code: 0x2 00:25:48.770 Status Code Type: 0x0 00:25:48.770 Do Not Retry: 1 00:25:48.770 Error Location: 0x28 00:25:48.770 LBA: 0x0 00:25:48.770 Namespace: 0x0 00:25:48.770 Vendor Log Page: 0x0 00:25:48.770 ----------- 00:25:48.770 Entry: 2 00:25:48.770 Error Count: 0x1 00:25:48.770 Submission Queue Id: 0x0 00:25:48.770 Command Id: 0x4 00:25:48.770 Phase Bit: 0 00:25:48.770 Status Code: 0x2 00:25:48.770 Status Code Type: 0x0 00:25:48.770 Do Not Retry: 1 00:25:48.770 Error Location: 0x28 00:25:48.770 LBA: 0x0 00:25:48.770 Namespace: 0x0 00:25:48.770 Vendor Log Page: 0x0 00:25:48.770 00:25:48.770 Number of Queues 00:25:48.770 ================ 00:25:48.770 Number of I/O Submission Queues: 128 00:25:48.770 Number of I/O Completion Queues: 128 00:25:48.770 00:25:48.770 ZNS Specific Controller Data 00:25:48.770 ============================ 00:25:48.770 Zone Append Size Limit: 0 00:25:48.770 00:25:48.770 00:25:48.770 Active Namespaces 00:25:48.770 ================= 00:25:48.770 get_feature(0x05) failed 00:25:48.770 Namespace ID:1 00:25:48.770 Command Set Identifier: NVM (00h) 00:25:48.770 Deallocate: Supported 00:25:48.770 Deallocated/Unwritten Error: Not Supported 00:25:48.770 Deallocated Read Value: Unknown 00:25:48.770 Deallocate in Write Zeroes: Not Supported 00:25:48.770 Deallocated Guard Field: 0xFFFF 00:25:48.770 Flush: Supported 00:25:48.770 Reservation: Not Supported 00:25:48.770 Namespace Sharing Capabilities: Multiple Controllers 00:25:48.770 Size (in LBAs): 3125627568 (1490GiB) 00:25:48.770 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:48.770 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:48.770 UUID: 182b047c-0045-4ad5-bd51-abefd0030c10 00:25:48.770 Thin Provisioning: Not Supported 00:25:48.770 Per-NS Atomic Units: Yes 00:25:48.770 Atomic Boundary Size (Normal): 0 00:25:48.770 Atomic Boundary Size (PFail): 0 00:25:48.770 Atomic Boundary Offset: 0 00:25:48.770 NGUID/EUI64 Never Reused: No 00:25:48.770 ANA group ID: 1 00:25:48.770 Namespace Write Protected: No 00:25:48.770 Number of LBA Formats: 1 00:25:48.770 Current LBA Format: LBA Format #00 00:25:48.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:48.770 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.770 rmmod nvme_tcp 00:25:48.770 rmmod nvme_fabrics 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:48.770 19:33:12 
00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']'
00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:25:48.770 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-save
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@789 -- # iptables-restore
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:48.771 19:33:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:51.304 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*)
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet
00:25:51.305 19:33:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:25:53.968 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:25:53.968 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:25:55.349 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:25:55.349
00:25:55.349 real 0m17.295s
00:25:55.349 user 0m4.357s
00:25:55.349 sys 0m8.746s
00:25:55.349 19:33:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:55.349 19:33:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:25:55.349 ************************************
00:25:55.349 END TEST nvmf_identify_kernel_target
00:25:55.349 ************************************
00:25:55.349 19:33:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:55.349 19:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:55.349 19:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:55.349 19:33:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:55.609 ************************************
00:25:55.609 START TEST nvmf_auth_host
00:25:55.609 ************************************
00:25:55.609 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:25:55.609 * Looking for test storage...
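The clean_kernel_target trace just above the END TEST banner removes, through configfs, everything the identify test had created. Condensed into a minimal stand-alone sketch (paths copied from this run; the namespace 'enable' attribute is an assumption about what the bare 'echo 0' in the trace writes to):

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$subsys/namespaces/1/enable"                 # disable the namespace first (assumed target of the traced 'echo 0')
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"   # unlink the subsystem from the port
rmdir "$subsys/namespaces/1"                           # remove namespace, then port, then subsystem
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                            # unload the kernel target modules

After the teardown, scripts/setup.sh rebinds the ioatdma and nvme devices to vfio-pci, which is the rebind listing shown above.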
00:25:55.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.609 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.610 --rc genhtml_branch_coverage=1 00:25:55.610 --rc genhtml_function_coverage=1 00:25:55.610 --rc genhtml_legend=1 00:25:55.610 --rc geninfo_all_blocks=1 00:25:55.610 --rc geninfo_unexecuted_blocks=1 00:25:55.610 00:25:55.610 ' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.610 --rc genhtml_branch_coverage=1 00:25:55.610 --rc genhtml_function_coverage=1 00:25:55.610 --rc genhtml_legend=1 00:25:55.610 --rc geninfo_all_blocks=1 00:25:55.610 --rc geninfo_unexecuted_blocks=1 00:25:55.610 00:25:55.610 ' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.610 --rc genhtml_branch_coverage=1 00:25:55.610 --rc genhtml_function_coverage=1 00:25:55.610 --rc genhtml_legend=1 00:25:55.610 --rc geninfo_all_blocks=1 00:25:55.610 --rc geninfo_unexecuted_blocks=1 00:25:55.610 00:25:55.610 ' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:55.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:55.610 --rc genhtml_branch_coverage=1 00:25:55.610 --rc genhtml_function_coverage=1 00:25:55.610 --rc genhtml_legend=1 00:25:55.610 --rc geninfo_all_blocks=1 00:25:55.610 --rc geninfo_unexecuted_blocks=1 00:25:55.610 00:25:55.610 ' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.610 19:33:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:55.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:55.610 19:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.184 19:33:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:02.184 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:02.184 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.184 
19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:02.184 Found net devices under 0000:86:00.0: cvl_0_0 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:02.184 Found net devices under 0000:86:00.1: cvl_0_1 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.184 19:33:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:02.184 19:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:02.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:02.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.430 ms
00:26:02.184
00:26:02.184 --- 10.0.0.2 ping statistics ---
00:26:02.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:02.184 rtt min/avg/max/mdev = 0.430/0.430/0.430/0.000 ms
00:26:02.184 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:02.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:02.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:26:02.184
00:26:02.185 --- 10.0.0.1 ping statistics ---
00:26:02.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:02.185 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=2228306
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 2228306
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2228306 ']'
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
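The nvmf_tcp_init and nvmfappstart steps traced above split the two e810 ports into a target side (cvl_0_0, moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and an initiator side (cvl_0_1, left in the root namespace as 10.0.0.1), then start nvmf_tgt inside the namespace. A condensed sketch of the same sequence (names and addresses from this run; the rpc.py loop is a simplified stand-in for waitforlisten, assuming the SPDK repo root as the working directory):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (root namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                         # connectivity check, as in the log
# Launch the SPDK target inside the namespace and poll its RPC socket until
# it answers (simplified stand-in for waitforlisten):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

In the log, the pid being polled is the nvmfpid=2228306 recorded just before waitforlisten.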
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0627caac9980707e9e51cbb3c9e72d9e 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.ms2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0627caac9980707e9e51cbb3c9e72d9e 0 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0627caac9980707e9e51cbb3c9e72d9e 0 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0627caac9980707e9e51cbb3c9e72d9e 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.ms2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.ms2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ms2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.185 19:33:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8575d14e1677e5d347651d4bd427387971815f8c77b5ce3760234f34b6d4c86e 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Jdk 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8575d14e1677e5d347651d4bd427387971815f8c77b5ce3760234f34b6d4c86e 3 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8575d14e1677e5d347651d4bd427387971815f8c77b5ce3760234f34b6d4c86e 3 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8575d14e1677e5d347651d4bd427387971815f8c77b5ce3760234f34b6d4c86e 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Jdk 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Jdk 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Jdk 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=f58ffbae1737425810cc228e7ff5741e43096ff0206bbe7a 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.LfW 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key f58ffbae1737425810cc228e7ff5741e43096ff0206bbe7a 0 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 f58ffbae1737425810cc228e7ff5741e43096ff0206bbe7a 0 
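gen_dhchap_key, traced above, builds each test secret by reading random bytes with xxd and wrapping them in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hash>:<base64>: through an inline python step (digest ids in the trace: 0 = unhashed, 1/2/3 = SHA-256/384/512). A hedged sketch of what that python block computes, assuming the trailer is the key's CRC-32 appended little-endian before base64 encoding:

key_hex=$(xxd -p -c0 -l 32 /dev/urandom)      # 32 random bytes = a 64-hex-char key, as in 'gen_dhchap_key sha512 64'
python3 - "$key_hex" <<'EOF' > /tmp/spdk.key-sha512.example   # hypothetical output path, for illustration only
import sys, base64, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, 'little')   # assumed CRC-32 trailer of the DHHC-1 format
print('DHHC-1:03:' + base64.b64encode(key + crc).decode() + ':')
EOF
chmod 0600 /tmp/spdk.key-sha512.example       # each key file is chmod'd to 0600 before use, as traced

The resulting files (/tmp/spdk.key-null.ms2, /tmp/spdk.key-sha512.Jdk, and so on in this run) are what the test later registers with the keyring_file_add_key RPCs.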
00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=f58ffbae1737425810cc228e7ff5741e43096ff0206bbe7a 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.LfW 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.LfW 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.LfW 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=802b4625f232e2f168b7eac761a8b138e125aa1cca346e29 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Dur 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 802b4625f232e2f168b7eac761a8b138e125aa1cca346e29 2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 802b4625f232e2f168b7eac761a8b138e125aa1cca346e29 2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=802b4625f232e2f168b7eac761a8b138e125aa1cca346e29 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Dur 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Dur 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Dur 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.185 19:33:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.185 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=2199fee320c72a2e023f9966b04bdf37 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.a2s 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 2199fee320c72a2e023f9966b04bdf37 1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 2199fee320c72a2e023f9966b04bdf37 1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=2199fee320c72a2e023f9966b04bdf37 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.a2s 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.a2s 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.a2s 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=01fe1d6656a542ce353a4887b713a736 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.S5r 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 01fe1d6656a542ce353a4887b713a736 1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 01fe1d6656a542ce353a4887b713a736 1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # 
key=01fe1d6656a542ce353a4887b713a736 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.S5r 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.S5r 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.S5r 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b3f6b174a80f65795e15438700129891ae23a11f0928bdf1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.ur9 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b3f6b174a80f65795e15438700129891ae23a11f0928bdf1 2 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b3f6b174a80f65795e15438700129891ae23a11f0928bdf1 2 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b3f6b174a80f65795e15438700129891ae23a11f0928bdf1 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:26:02.186 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.ur9 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.ur9 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ur9 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:26:02.445 19:33:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=d25eeb59fd7cd6a2d5f9e8c648534f16 00:26:02.445 19:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.brO 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key d25eeb59fd7cd6a2d5f9e8c648534f16 0 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 d25eeb59fd7cd6a2d5f9e8c648534f16 0 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=d25eeb59fd7cd6a2d5f9e8c648534f16 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.brO 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.brO 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.brO 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=049573c3050ce8388b8d04412278b0fab04f965aeb53c73740a5aa82afc29577 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.y0q 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 049573c3050ce8388b8d04412278b0fab04f965aeb53c73740a5aa82afc29577 3 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 049573c3050ce8388b8d04412278b0fab04f965aeb53c73740a5aa82afc29577 3 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=049573c3050ce8388b8d04412278b0fab04f965aeb53c73740a5aa82afc29577 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@731 -- # python - 00:26:02.445 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.y0q 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.y0q 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.y0q 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2228306 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2228306 ']' 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:02.446 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ms2 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Jdk ]] 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jdk 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.LfW 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Dur ]] 00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.705 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.a2s
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.S5r ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.S5r
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ur9
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.brO ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.brO
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.y0q
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
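Each rpc_cmd above is the harness's thin wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so the same key registration can be reproduced by hand; a sketch using the key files generated earlier in this trace:

    # register host secrets (keyN) and controller secrets (ckeyN) with the SPDK keyring
    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.ms2
    scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Jdk
    scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.LfW
    scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Dur

The key names (key0, ckey0, ...) are what the later bdev_nvme_attach_controller calls reference via --dhchap-key and --dhchap-ctrlr-key.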
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]]
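get_main_ns_ip, traced above, just maps the transport to the name of the right environment variable and dereferences it. A condensed illustrative reimplementation (TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP are the harness's variables; the function body here is a sketch, not the original):

    get_main_ns_ip() {
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        local var=${ip_candidates[$TEST_TRANSPORT]}    # e.g. tcp -> NVMF_INITIATOR_IP
        [[ -n $var && -n ${!var} ]] && echo "${!var}"  # prints 10.0.0.1 in this run
    }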
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]]
00:26:02.706 19:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:26:05.239 Waiting for block devices as requested
00:26:05.499 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:26:05.499 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:26:05.499 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:26:05.758 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:26:05.758 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:26:05.758 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:26:06.017 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:26:06.017 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:26:06.017 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:26:06.017 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:26:06.276 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:26:06.276 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:26:06.276 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:26:06.276 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:26:06.535 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:26:06.535 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:26:06.535 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme*
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]]
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:26:07.103 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:26:07.363 No valid GPT data, bailing
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]]
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1
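The three mkdir calls above create the configfs skeleton for the kernel nvmet target; the echo and ln -s writes that follow in the trace populate it and expose the subsystem on the TCP port. A condensed standalone equivalent, with the attribute names from the kernel's nvmet configfs interface and the device path and NQN taken from this run (the harness also sets the model string and host-access policy, omitted here):

    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$sub/namespaces/1" "$port"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # back the namespace with the local NVMe disk
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"               # TCP listener: address, transport, service, family
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on the port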
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme0n1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 10.0.0.1
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo tcp
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:26:07.363 19:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:26:07.363
00:26:07.363 Discovery Log Number of Records 2, Generation counter 2
00:26:07.363 =====Discovery Log Entry 0======
00:26:07.363 trtype: tcp
00:26:07.363 adrfam: ipv4
00:26:07.363 subtype: current discovery subsystem
00:26:07.363 treq: not specified, sq flow control disable supported
00:26:07.363 portid: 1
00:26:07.363 trsvcid: 4420
00:26:07.363 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:07.363 traddr: 10.0.0.1
00:26:07.363 eflags: none
00:26:07.363 sectype: none
00:26:07.363 =====Discovery Log Entry 1======
00:26:07.363 trtype: tcp
00:26:07.363 adrfam: ipv4
00:26:07.363 subtype: nvme subsystem
00:26:07.363 treq: not specified, sq flow control disable supported
00:26:07.363 portid: 1
00:26:07.363 trsvcid: 4420
00:26:07.363 subnqn: nqn.2024-02.io.spdk:cnode0
00:26:07.363 traddr: 10.0.0.1
00:26:07.363 eflags: none
00:26:07.363 sectype: none
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
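With the host NQN created and linked into the subsystem's allowed_hosts, nvmet_auth_set_key (next in the trace) programs the DH-HMAC-CHAP parameters for that host; the echo commands that follow correspond to writes into the host's configfs auth attributes. A sketch, assuming the kernel's dhchap_* attribute names, with the hash, group, and key value copied from this run:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048 > "$host/dhchap_dhgroup"
    echo 'DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==:' > "$host/dhchap_key"
    # a controller (bidirectional) secret, when present, would go to "$host/dhchap_ctrl_key"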
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==:
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==:
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:07.363 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==:
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]]
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==:
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=()
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]]
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]]
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:07.364 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:07.623 nvme0n1
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:26:07.623 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH:
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=:
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH:
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]]
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=:
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
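Every connect_authenticate iteration from here on repeats the same initiator-side pattern that just succeeded for keyid 1: pin the digest/dhgroup combination under test, attach with the keyring names, confirm the controller came up, and detach. Condensed to the underlying SPDK RPCs, all flags exactly as they appear in the trace:

    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers     # expect the name "nvme0" back
    scripts/rpc.py bdev_nvme_detach_controller nvme0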
00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.624 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.624 nvme0n1 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.883 19:33:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:07.883 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.884 nvme0n1 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:07.884 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.144 nvme0n1 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.144 19:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 nvme0n1 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:08.404 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.405 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.664 nvme0n1 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.664 19:33:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:08.664 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.665 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.924 nvme0n1 00:26:08.924 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:08.925 
19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.925 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.184 nvme0n1 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.184 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.185 19:33:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.185 19:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.445 nvme0n1 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.445 19:33:33 
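
Note the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 in the trace: the :+ expansion emits the --dhchap-ctrlr-key argument only when a controller key is configured for that keyid, so bidirectional authentication is requested per key. For keyid 4, just below, ckey is empty (the [[ -z '' ]] test), and the attach is issued with --dhchap-key key4 alone, i.e. unidirectional. The idiom, condensed from the traced lines:

  # Empty ckeys[keyid] -> ckey=() and the flag disappears from the command line;
  # non-empty -> both words are appended, requesting mutual authentication.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
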
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.445 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.704 nvme0n1 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:09.704 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:09.705 19:33:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.705 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.964 nvme0n1 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.964 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates=() 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:09.965 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.224 nvme0n1 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:10.224 19:33:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:10.224 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.225 19:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.484 nvme0n1 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:10.484 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
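
Each secret echoed in this trace uses the NVMe-oF configured-secret representation, DHHC-1:<hh>:<base64>:, where <hh> selects the hash the key was sized for: 00 means the secret is used as-is, while 01/02/03 correspond to SHA-256/384/512-length keys, and the base64 payload carries the key material plus a CRC-32 check per the spec's representation. As an illustration only (flags per recent nvme-cli, not part of this trace), such a key could be minted with:

  # 32-byte, SHA-256-flavored DH-HMAC-CHAP secret for the given host NQN
  nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
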
00:26:10.485 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.744 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.004 nvme0n1 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.004 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.264 nvme0n1 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.264 19:33:34 
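
The repeated get_main_ns_ip block in the trace (nvmf/common.sh@767-781) picks the address to dial from the transport: ip_candidates maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, the chosen variable name is then dereferenced, and 10.0.0.1 comes out here. A simplified paraphrase of what the traced lines do, assuming the suite's TEST_TRANSPORT and NVMF_INITIATOR_IP variables are set (the real helper also guards each step with the [[ -z ... ]] checks visible above):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      ip=${ip_candidates[$TEST_TRANSPORT]}  # "tcp" -> NVMF_INITIATOR_IP
      echo "${!ip}"                         # indirect expansion -> 10.0.0.1
  }
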
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.264 19:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.524 nvme0n1 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:11.524 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_INITIATOR_IP 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:11.525 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.094 nvme0n1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 
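
Between iterations the test proves the attach actually authenticated: the nvme0n1 lines are the namespace surfacing on the host, bdev_nvme_get_controllers piped through jq must report the controller name, and only then is the controller detached for the next dhgroup/keyid combination. Condensed, assuming rpc_cmd wraps scripts/rpc.py against the running SPDK target:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == \n\v\m\e\0 ]]   # backslash-escaping every char forces a literal, non-glob match
  rpc_cmd bdev_nvme_detach_controller nvme0
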
00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.094 19:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.353 nvme0n1 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.353 19:33:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.353 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.612 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.872 nvme0n1 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.872 19:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 nvme0n1 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@768 -- # ip_candidates=() 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.441 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.699 nvme0n1 00:26:13.699 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.699 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.699 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.699 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.699 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.699 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.958 19:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:14.526 nvme0n1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.526 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.095 nvme0n1 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:15.095 
19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.095 19:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.663 nvme0n1 00:26:15.663 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.922 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.923 
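The nvmf/common.sh@767-781 lines repeated above are get_main_ns_ip resolving which address the host should dial: the transport name selects the name of an environment variable, which is then dereferenced with bash indirect expansion. A sketch reconstructed from the trace, not the verbatim function (error paths abbreviated):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP      # RDMA runs dial the first target IP
          [tcp]=NVMF_INITIATOR_IP          # resolves to 10.0.0.1 in this run
      )
      # Both the transport and its candidate variable name must be known.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1          # indirect expansion: $NVMF_INITIATOR_IP
      echo "${!ip}"
  }

Keying the table by transport is what lets the same auth test run unchanged over tcp and rdma.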
19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.923 19:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.491 nvme0n1 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:16.491 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.492 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.060 nvme0n1 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.060 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.320 nvme0n1 00:26:17.320 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.320 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.320 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.320 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.320 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.320 19:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.320 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.580 nvme0n1 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:17.580 19:33:41 
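The key= strings echoed above follow the DH-HMAC-CHAP secret representation from the NVMe specification: DHHC-1:<hh>:<base64>:, where the two-digit field encodes the secret transform and length (00 = unhashed, 01/02/03 = SHA-256/384/512, i.e. 32/48/64-byte secrets) and the base64 payload carries the secret with a CRC-32 appended. The tail characters (PR1/p, jGX3sg==) therefore look like noise but are checksum, not corruption. A quick decode of the keyid=2 secret just printed; the format reading is per spec, not taken from this log:

  key='DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p:'
  b64=${key#DHHC-1:*:}              # strip the "DHHC-1:01:" prefix
  b64=${b64%:}                      # and the trailing colon
  echo "$b64" | base64 -d | wc -c   # prints 36: 32-byte secret + 4-byte CRC-32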
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.580 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.839 nvme0n1 00:26:17.839 19:33:41 
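The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at auth.sh@58, which recurs throughout the trace, is what switches between unidirectional and bidirectional authentication: when ckeys[keyid] is empty (keyid 4 has ckey=''), the :+ alternate-value form expands to nothing and the attach carries no --dhchap-ctrlr-key at all; otherwise the array gains exactly those two extra arguments. A standalone demonstration with hypothetical array contents:

  ckeys=([1]=secret1 [4]="")
  for keyid in 1 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${#ckey[@]} extra args:" "${ckey[@]}"
  done
  # keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
  # keyid=4 -> 0 extra args:

Building an array rather than a string keeps the optional arguments word-safe when they are later splatted into rpc_cmd.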
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:17.839 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.840 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.099 nvme0n1 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.099 nvme0n1 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.099 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.359 19:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.359 nvme0n1 00:26:18.359 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.359 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.359 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.359 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.359 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.619 
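The host/auth.sh@101-@104 frames that recur through this trace are the test's driving loops: for every DH group, every key index is first programmed on the kernel target and then exercised from the SPDK host. A minimal reconstruction consistent with the line tags above; the array names dhgroups/keys and the sha384 digest seen in this stretch are visible in the trace (an enclosing digest loop is likely but not shown here), the rest is a sketch:

for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # @103: program the kernel target side
        connect_authenticate sha384 "$dhgroup" "$keyid"  # @104: authenticate from the SPDK host side
    done
done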
19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:18.619 19:33:42 
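Each rpc_cmd frame above is one JSON-RPC call into the SPDK application; rpc_cmd is the test harness's wrapper around scripts/rpc.py. Outside the harness the same two calls can be issued directly. The method names and flags below are copied verbatim from the trace; the prior registration of key0/ckey0 under those key names is assumed, since that step is not part of this excerpt:

# Pin the host to a single digest/DH-group pair for the next connect:
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Connect with in-band authentication using the named keys:
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0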
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.619 nvme0n1 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.619 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.879 nvme0n1 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.879 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local 
-A ip_candidates 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.138 nvme0n1 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.138 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:19.397 
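The host/auth.sh@42-@51 frames repeated for every key reconstruct to roughly the helper below. The echoed values -- 'hmac(sha384)', the DH group, the DHHC-1 secret and, when one exists, the controller secret -- are taken from the trace; their redirect targets are not shown, so the configfs paths here are an assumption based on the per-host dhchap attributes Linux nvmet exposes:

# Sketch of nvmet_auth_set_key (host/auth.sh@42..51): target-side key setup.
nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey              # @42
    digest="$1" dhgroup="$2" keyid="$3"              # @44
    key="${keys[keyid]}" ckey="${ckeys[keyid]}"      # @45/@46
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
    echo "hmac($digest)" > "$host/dhchap_hash"       # @48
    echo "$dhgroup"      > "$host/dhchap_dhgroup"    # @49
    echo "$key"          > "$host/dhchap_key"        # @50
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # @51: only for bidirectional auth
}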
19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:19.397 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:19.398 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.398 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.398 19:33:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.398 nvme0n1 00:26:19.398 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.398 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.398 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.398 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.398 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.398 
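Its host-side counterpart, connect_authenticate (host/auth.sh@55-@65), configures the SPDK bdev_nvme layer, connects with the key under test, verifies that a controller named nvme0 actually came up, and disconnects again. Everything below is copied from, or directly implied by, the traced frames:

# Sketch of connect_authenticate (host/auth.sh@55..65): host-side authentication check.
connect_authenticate() {
    local digest dhgroup keyid ckey                  # @55
    digest="$1" dhgroup="$2" keyid="$3"              # @57
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58: optional controller key
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"  # @60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"      # @61
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]  # @64
    rpc_cmd bdev_nvme_detach_controller nvme0        # @65
}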
19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.657 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.917 nvme0n1 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:19.917 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:19.918 19:33:43 
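The nvmf/common.sh@767-@781 frames that precede every attach are the target-address lookup. The reconstruction below keeps the variable names verbatim from the trace: the candidate table maps the transport to the name of the environment variable holding the address, and bash indirect expansion (${!ip}) yields the 10.0.0.1 echoed at @781. The error branches are sketched, since only their tests appear in the trace:

# Sketch of get_main_ns_ip (nvmf/common.sh@767..781).
get_main_ns_ip() {
    local ip                                         # @767
    local -A ip_candidates=()                        # @768
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP       # @770
    ip_candidates["tcp"]=NVMF_INITIATOR_IP           # @771
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @773 (sketch)
    ip=${ip_candidates[$TEST_TRANSPORT]}             # @774: here ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                      # @776 (sketch)
    echo "${!ip}"                                    # @781: 10.0.0.1 in this run
}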
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.918 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.178 nvme0n1 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.178 19:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.437 nvme0n1 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.437 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.438 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.697 nvme0n1 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.697 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.957 19:33:44 
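All secrets in this trace use the DHHC-1 string representation, 'DHHC-1:<t>:<base64>:' (the trailing colon is part of the string), where <t> encodes how the configured secret was transformed: 00 = untransformed, 01/02/03 = SHA-256/384/512 per the NVMe in-band authentication spec. A quick inspection of one key taken verbatim from this trace; the claim that the base64 payload ends in a 4-byte CRC-32 comes from nvme-cli's key-generation documentation, not from this log:

# Split a traced DHHC-1 key into its fields and check the payload size.
key='DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=:'
IFS=: read -r _ transform secret _ <<< "$key"
echo "transform=$transform"                          # 03 -> SHA-512-transformed secret
printf '%s' "$secret" | base64 -d | wc -c            # expect 68: 64 secret bytes + 4-byte CRC-32 (assumption)
# Keys like these are typically produced with nvme-cli's 'nvme gen-dhchap-key'
# (an assumption; key generation is not part of this excerpt).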
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.957 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.217 nvme0n1 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.217 19:33:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.477 nvme0n1 00:26:21.477 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.477 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.477 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.477 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.477 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.477 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.736 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.995 nvme0n1 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.995 19:33:45 
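The connect_authenticate pass traced here reduces to two SPDK RPCs, both visible verbatim in the log: first constrain the initiator's digest/DH-group proposal, then attach with the keyring names for this key index. Condensed from the trace (same flags, same NQNs):

# Host side of one iteration, as issued through rpc_cmd in the trace:
rpc_cmd bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

Note that key1/ckey1 are key names registered earlier in the script, not the DHHC-1 secrets themselves.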
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.995 19:33:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.995 19:33:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 nvme0n1 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.563 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.563 
19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.822 nvme0n1 00:26:22.822 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.822 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.822 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.822 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.822 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.822 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.081 19:33:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.340 nvme0n1 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.340 19:33:47 
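Each attach is followed by the same success check (auth.sh@64-65): the authenticated controller must show up under the expected name before it is torn down for the next combination. As a plain sequence:

# Verify the DH-HMAC-CHAP connect succeeded, then detach for the next pass.
ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ ${ctrlr} == "nvme0" ]]   # the trace renders this glob-escaped: \n\v\m\e\0
rpc_cmd bdev_nvme_detach_controller nvme0
# The interleaved "nvme0n1" tokens are the kernel announcing the namespace
# while the controller is live.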
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:23.340 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.341 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.924 nvme0n1 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:23.924 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.183 19:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.751 nvme0n1 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:24.751 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.752 
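get_main_ns_ip, traced at nvmf/common.sh@767-781 on every iteration, simply maps the transport to the environment variable holding the target address and prints its value. The function below is a best-effort reconstruction from the xtrace, not copied from nvmf/common.sh; the $TEST_TRANSPORT name is assumed, since the trace only shows its expansion, "tcp".

# Best-effort reconstruction of get_main_ns_ip from the xtrace above.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp here -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # indirect expansion of that variable
    echo "${!ip}"                          # prints 10.0.0.1 in this run
}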
19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.752 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.320 nvme0n1 00:26:25.320 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.320 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.320 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.320 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.320 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.320 19:33:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.320 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.889 nvme0n1 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.889 19:33:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.889 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.149 19:33:49 
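Note the keyid=4 passes: ckey is empty ([[ -z '' ]] at auth.sh@51), so only unidirectional authentication is exercised for that index. The array expansion at auth.sh@58, quoted here as traced, is what makes the controller-key flag optional:

# auth.sh@58 as traced: the array stays empty when ckeys[keyid] is unset,
# so "${ckey[@]}" contributes nothing to the attach RPC for keyid 4.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"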
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.149 19:33:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 nvme0n1 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:26.717 nvme0n1 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.717 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.978 nvme0n1 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:26.978 
19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:26.978 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.238 nvme0n1 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.238 
19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.238 19:33:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.498 nvme0n1 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.498 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.757 nvme0n1 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.757 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.758 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 nvme0n1 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.017 
19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:28.017 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.018 19:33:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.018 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.277 nvme0n1 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:28.277 19:33:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.277 19:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.537 nvme0n1 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.537 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.538 19:33:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.538 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.797 nvme0n1 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:28.797 
19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.797 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.798 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
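[The xtrace above repeats one loop body per (digest, dhgroup, keyid) combination: set the key on the kernel nvmet target, restrict the SPDK host to the digest/dhgroup pair under test, attach with that key, verify, detach. A minimal sketch of one host-side iteration follows, assuming the rpc_cmd wrapper, the ckeys array, and the registered key names (key0/ckey0, ...) are provided by the test harness as seen in this log; the variable names digest, dhgroup, keyid, and ckey_arg are illustrative, not the exact ones in host/auth.sh.]

    # One iteration of the digest/dhgroup/keyid sweep logged above (illustrative sketch;
    # rpc_cmd, the ckeys array, and the key names come from the test harness).
    digest=sha512
    dhgroup=ffdhe3072
    keyid=0
    # Append --dhchap-ctrlr-key only when a controller key exists for this keyid,
    # mirroring the harness line: ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # Limit the host to the digest/dhgroup pair under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Connect to the target listener on 10.0.0.1:4420 and authenticate with key$keyid.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey_arg[@]}"
    # Confirm the controller attached, then detach before the next combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

[Keyid 4 has no controller key, so ckey_arg expands to nothing and the attach runs with unidirectional authentication only, which matches the bare "--dhchap-key key4" attach visible in the log.]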
00:26:29.057 nvme0n1 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.057 19:33:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.057 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.058 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.317 nvme0n1 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.317 19:33:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.317 19:33:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.317 19:33:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.317 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.318 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.577 nvme0n1 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.577 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.837 nvme0n1 00:26:29.837 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.837 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.837 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.837 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.837 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.837 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:30.096 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.097 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.356 nvme0n1 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:30.356 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.357 19:33:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.616 nvme0n1 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.616 19:33:54 
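Before every attach, bdev_nvme_set_options narrows the host's negotiable parameters to exactly one digest and one DH group, so a successful connect proves that specific combination authenticated rather than some fallback. rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; a minimal standalone reproduction of the host-side calls, assuming the key0/ckey0 key entries were registered earlier in the run (that setup is outside this excerpt):

  # Host side of connect_authenticate for sha512/ffdhe6144, keyid 0.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0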
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.616 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.185 nvme0n1 00:26:31.185 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.186 19:33:54 
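Each attach is followed by the same pass/fail check (host/auth.sh@64-65). The bare nvme0n1 lines in the trace are the attach RPC's output, the name of the bdev it created (controller nvme0, namespace 1); bdev_nvme_get_controllers must then report exactly one controller named nvme0, and the controller is detached so the next digest/dhgroup/key combination starts clean. The \n\v\m\e\0 form in the trace is just how xtrace renders a quoted right-hand side inside [[ == ]]: the escaping marks a literal comparison rather than glob matching. A sketch of the check:

  # Verify the authenticated attach, then tear down for the next iteration.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]   # xtrace shows this as [[ nvme0 == \n\v\m\e\0 ]]
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0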
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.186 19:33:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.445 nvme0n1 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:31.445 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.446 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.014 nvme0n1 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.014 19:33:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.273 nvme0n1 00:26:32.273 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.273 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.273 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.273 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.273 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.273 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.533 19:33:56 
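All secrets in this section use the NVMe-oF shared-secret representation DHHC-1:<t>:<base64>:. As background (from the DH-HMAC-CHAP secret format, not something the log itself states): the two-digit field is 00 for a secret stored as-is, or 01/02/03 for secrets sized for SHA-256/384/512, and the base64 blob carries the secret followed by a 4-byte CRC-32. The lengths in this run are consistent with that reading: the DHHC-1:00: blob for keyid 0 above decodes to 36 bytes (32-byte secret plus CRC), and the DHHC-1:03: blob for keyid 4 decodes to 68 bytes (64-byte secret plus CRC). Quick check:

  # Decoded DHHC-1 blob length = secret bytes + 4-byte CRC-32.
  blob="MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH"   # keyid 0 above
  echo -n "$blob" | base64 -d | wc -c   # prints 36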
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.533 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.793 nvme0n1 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDYyN2NhYWM5OTgwNzA3ZTllNTFjYmIzYzllNzJkOWXtnrYH: 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODU3NWQxNGUxNjc3ZTVkMzQ3NjUxZDRiZDQyNzM4Nzk3MTgxNWY4Yzc3YjVjZTM3NjAyMzRmMzRiNmQ0Yzg2ZQbCgM4=: 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.793 19:33:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.372 nvme0n1 00:26:33.372 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.372 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.372 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.372 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.372 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.372 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.638 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.206 nvme0n1 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.206 19:33:57 
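The names key1/ckey1 passed via --dhchap-key and --dhchap-ctrlr-key are not the secrets themselves. Under the assumption that this run uses SPDK's file-based keyring (the registration happens earlier in the test, outside this excerpt), each DHHC-1 string would have been written to a file and registered under those names, roughly:

  # Hypothetical earlier setup: register a DHHC-1 secret file under the name
  # "key1" so the attach RPC can reference it. Path and secret below are
  # placeholders, not values from this log.
  echo -n "DHHC-1:00:...:" > /tmp/spdk.key1
  chmod 0600 /tmp/spdk.key1
  ./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key1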
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.206 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.206 19:33:57 
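The get_main_ns_ip trace that follows (nvmf/common.sh@767-781) shows how the initiator address is resolved before each attach: an associative array maps the transport to the name of the variable holding the address, that name is checked and dereferenced, and the result (10.0.0.1 for this tcp run) is echoed. Reconstructed from the trace, with the transport variable name being an assumption since xtrace only shows its expanded value:

  get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                  # traced as [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1                           # traced as [[ -z 10.0.0.1 ]]
    echo "${!ip}"
  }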
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.207 19:33:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.774 nvme0n1 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:34.774 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjNmNmIxNzRhODBmNjU3OTVlMTU0Mzg3MDAxMjk4OTFhZTIzYTExZjA5MjhiZGYxjGX3sg==: 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: ]] 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDI1ZWViNTlmZDdjZDZhMmQ1ZjllOGM2NDg1MzRmMTaTWG2j: 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:34.775 19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.775 
19:33:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.342 nvme0n1 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:35.342 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDQ5NTczYzMwNTBjZTgzODhiOGQwNDQxMjI3OGIwZmFiMDRmOTY1YWViNTNjNzM3NDBhNWFhODJhZmMyOTU3Nw/Gm3I=: 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.343 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.982 nvme0n1 00:26:35.982 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.982 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.982 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.982 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.982 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.982 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.242 request: 00:26:36.242 { 00:26:36.242 "name": "nvme0", 00:26:36.242 "trtype": "tcp", 00:26:36.242 "traddr": "10.0.0.1", 00:26:36.242 "adrfam": "ipv4", 00:26:36.242 "trsvcid": "4420", 00:26:36.242 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:36.242 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:36.242 "prchk_reftag": false, 00:26:36.242 "prchk_guard": false, 00:26:36.242 "hdgst": false, 00:26:36.242 "ddgst": false, 00:26:36.242 "allow_unrecognized_csi": false, 00:26:36.242 "method": "bdev_nvme_attach_controller", 00:26:36.242 "req_id": 1 00:26:36.242 } 00:26:36.242 Got JSON-RPC error response 00:26:36.242 response: 00:26:36.242 { 00:26:36.242 "code": -5, 00:26:36.242 "message": "Input/output error" 00:26:36.242 } 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:36.242 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 
00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.243 request: 00:26:36.243 { 00:26:36.243 "name": "nvme0", 00:26:36.243 "trtype": "tcp", 00:26:36.243 "traddr": "10.0.0.1", 00:26:36.243 "adrfam": "ipv4", 00:26:36.243 "trsvcid": "4420", 00:26:36.243 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:36.243 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:36.243 "prchk_reftag": false, 00:26:36.243 "prchk_guard": false, 00:26:36.243 "hdgst": false, 00:26:36.243 "ddgst": false, 00:26:36.243 "dhchap_key": "key2", 00:26:36.243 "allow_unrecognized_csi": false, 00:26:36.243 "method": "bdev_nvme_attach_controller", 00:26:36.243 "req_id": 1 00:26:36.243 } 00:26:36.243 Got JSON-RPC error response 00:26:36.243 response: 00:26:36.243 { 00:26:36.243 "code": -5, 00:26:36.243 "message": "Input/output error" 00:26:36.243 } 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.243 19:33:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.243 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.502 request: 00:26:36.502 { 00:26:36.502 "name": "nvme0", 00:26:36.502 "trtype": "tcp", 00:26:36.502 "traddr": "10.0.0.1", 00:26:36.502 "adrfam": "ipv4", 00:26:36.502 "trsvcid": "4420", 00:26:36.502 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:36.502 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:36.502 "prchk_reftag": false, 00:26:36.502 "prchk_guard": false, 00:26:36.502 "hdgst": false, 00:26:36.502 "ddgst": false, 00:26:36.502 "dhchap_key": "key1", 00:26:36.502 "dhchap_ctrlr_key": "ckey2", 00:26:36.502 "allow_unrecognized_csi": false, 00:26:36.502 "method": "bdev_nvme_attach_controller", 00:26:36.502 "req_id": 1 00:26:36.502 } 00:26:36.502 Got JSON-RPC error response 00:26:36.502 response: 00:26:36.502 { 00:26:36.502 "code": -5, 00:26:36.502 "message": "Input/output 
error" 00:26:36.502 } 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.502 nvme0n1 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.502 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 request: 00:26:36.761 { 00:26:36.761 "name": "nvme0", 00:26:36.761 "dhchap_key": "key1", 00:26:36.761 "dhchap_ctrlr_key": "ckey2", 00:26:36.761 "method": "bdev_nvme_set_keys", 00:26:36.761 "req_id": 1 00:26:36.761 } 00:26:36.761 Got JSON-RPC error response 00:26:36.761 response: 00:26:36.761 { 00:26:36.761 "code": -13, 00:26:36.761 "message": "Permission denied" 00:26:36.761 } 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:36.761 19:34:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:37.698 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.698 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:37.698 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:37.698 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.698 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.956 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:37.956 19:34:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:38.891 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.891 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:38.891 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.891 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjU4ZmZiYWUxNzM3NDI1ODEwY2MyMjhlN2ZmNTc0MWU0MzA5NmZmMDIwNmJiZTdhZMJgKQ==: 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: ]] 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ODAyYjQ2MjVmMjMyZTJmMTY4YjdlYWM3NjFhOGIxMzhlMTI1YWExY2NhMzQ2ZTI5uWLLHg==: 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.892 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.151 nvme0n1 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjE5OWZlZTMyMGM3MmEyZTAyM2Y5OTY2YjA0YmRmMzcPR1/p: 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: ]] 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDFmZTFkNjY1NmE1NDJjZTM1M2E0ODg3YjcxM2E3MzYhB/bT: 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.151 request: 00:26:39.151 { 00:26:39.151 "name": "nvme0", 00:26:39.151 "dhchap_key": "key2", 00:26:39.151 "dhchap_ctrlr_key": "ckey1", 00:26:39.151 "method": "bdev_nvme_set_keys", 00:26:39.151 "req_id": 1 00:26:39.151 } 00:26:39.151 Got JSON-RPC error response 00:26:39.151 response: 00:26:39.151 { 00:26:39.151 "code": -13, 00:26:39.151 "message": "Permission denied" 00:26:39.151 } 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:39.151 19:34:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:40.089 19:34:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:40.089 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:40.089 rmmod nvme_tcp 00:26:40.348 rmmod nvme_fabrics 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 2228306 ']' 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 2228306 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2228306 ']' 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2228306 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2228306 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2228306' 00:26:40.348 killing process with pid 2228306 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2228306 00:26:40.348 19:34:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2228306 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-save 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@789 -- # iptables-restore 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:40.348 19:34:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:26:42.883 19:34:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:45.417 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:45.417 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:45.417 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:45.417 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:45.417 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:45.418 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:45.677 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:47.055 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:47.055 19:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ms2 /tmp/spdk.key-null.LfW /tmp/spdk.key-sha256.a2s /tmp/spdk.key-sha384.ur9 /tmp/spdk.key-sha512.y0q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:47.055 19:34:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:49.592 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:49.592 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
00:26:49.592 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:49.592 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:49.852 00:26:49.852 real 0m54.331s 00:26:49.852 user 0m48.412s 00:26:49.852 sys 0m12.698s 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 ************************************ 00:26:49.852 END TEST nvmf_auth_host 00:26:49.852 ************************************ 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.852 ************************************ 00:26:49.852 START TEST nvmf_digest 00:26:49.852 ************************************ 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:49.852 * Looking for test storage... 
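With the auth suite closed out by its timing summary above, the harness hands off to the next suite through the same run_test wrapper. As a sketch, and assuming an environment already configured the way this run's workspace is, the digest suite could be launched on its own with the exact invocation recorded in the log:

    # Re-run only the digest suite from a workspace checkout (paths taken
    # verbatim from the log; environment setup is assumed, not shown):
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/host/digest.sh --transport=tcp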
00:26:49.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.852 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.113 --rc genhtml_branch_coverage=1 00:26:50.113 --rc genhtml_function_coverage=1 00:26:50.113 --rc genhtml_legend=1 00:26:50.113 --rc geninfo_all_blocks=1 00:26:50.113 --rc geninfo_unexecuted_blocks=1 00:26:50.113 00:26:50.113 ' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.113 --rc genhtml_branch_coverage=1 00:26:50.113 --rc genhtml_function_coverage=1 00:26:50.113 --rc genhtml_legend=1 00:26:50.113 --rc geninfo_all_blocks=1 00:26:50.113 --rc geninfo_unexecuted_blocks=1 00:26:50.113 00:26:50.113 ' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.113 --rc genhtml_branch_coverage=1 00:26:50.113 --rc genhtml_function_coverage=1 00:26:50.113 --rc genhtml_legend=1 00:26:50.113 --rc geninfo_all_blocks=1 00:26:50.113 --rc geninfo_unexecuted_blocks=1 00:26:50.113 00:26:50.113 ' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:50.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:50.113 --rc genhtml_branch_coverage=1 00:26:50.113 --rc genhtml_function_coverage=1 00:26:50.113 --rc genhtml_legend=1 00:26:50.113 --rc geninfo_all_blocks=1 00:26:50.113 --rc geninfo_unexecuted_blocks=1 00:26:50.113 00:26:50.113 ' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.113 
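[annotation] The lcov gate traced just above (lt 1.15 2 via cmp_versions) splits both version strings on '.', '-' and ':' and compares them component by component. A condensed sketch of that logic, assuming only the '<' operator exercised here; the real scripts/common.sh helper handles more operators:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older: '<' true
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer: '<' false
        done
        return 1                                              # equal: not strictly less
    }

Because 1.15 < 2 the helper returns 0, so the legacy '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling is exported in LCOV_OPTS above.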
19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.113 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:50.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # prepare_net_devs 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # local -g is_hw=no 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # remove_spdk_ns 00:26:50.114 19:34:13 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:50.114 19:34:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.686 
19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:56.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:56.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:56.686 Found net devices under 0000:86:00.0: cvl_0_0 
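[annotation] Each matching PCI function resolves to its kernel netdev the same way: a glob under sysfs, then a basename strip (the second E810 port repeats the pattern just below). Stand-alone, the lookup traced here is simply:

    # Equivalent sysfs lookup for the two E810 (0x159b) ports found above.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to $pci
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done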
00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:26:56.686 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ up == up ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:56.687 Found net devices under 0000:86:00.1: cvl_0_1 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # is_hw=yes 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:56.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:26:56.687 00:26:56.687 --- 10.0.0.2 ping statistics --- 00:26:56.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.687 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:56.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:26:56.687 00:26:56.687 --- 10.0.0.1 ping statistics --- 00:26:56.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.687 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # return 0 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:56.687 ************************************ 00:26:56.687 START TEST nvmf_digest_clean 00:26:56.687 ************************************ 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # nvmfpid=2242085 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # waitforlisten 2242085 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2242085 ']' 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.687 [2024-10-17 19:34:19.746596] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:26:56.687 [2024-10-17 19:34:19.746658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.687 [2024-10-17 19:34:19.827434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.687 [2024-10-17 19:34:19.867570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.687 [2024-10-17 19:34:19.867609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.687 [2024-10-17 19:34:19.867616] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.687 [2024-10-17 19:34:19.867622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.687 [2024-10-17 19:34:19.867627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
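[annotation] The nvmf_tgt that just started is reachable because nvmf_tcp_init, traced a screen up, split the two cvl ports across network namespaces: the target side lives in cvl_0_0_ns_spdk at 10.0.0.2 while the initiator stays in the root namespace at 10.0.0.1, so NVMe/TCP traffic crosses the real E810 link. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
    ping -c 1 10.0.0.2                                          # cross-namespace sanity check

Every target-side command afterwards, including nvmf_tgt itself, is prefixed with 'ip netns exec cvl_0_0_ns_spdk'.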
00:26:56.687 [2024-10-17 19:34:19.868190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.687 19:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.687 null0 00:26:56.687 [2024-10-17 19:34:20.015014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:56.687 [2024-10-17 19:34:20.039206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2242108 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2242108 /var/tmp/bperf.sock 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2242108 ']' 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.687 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.688 [2024-10-17 19:34:20.092397] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:26:56.688 [2024-10-17 19:34:20.092438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242108 ] 00:26:56.688 [2024-10-17 19:34:20.167490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.688 [2024-10-17 19:34:20.209621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:56.688 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:56.946 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.946 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.205 nvme0n1 00:26:57.205 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.205 19:34:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.205 Running I/O for 2 seconds... 
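[annotation] While that two-second randread run ticks, the harness sequence that got here is worth spelling out. run_bperf starts bdevperf suspended (-z --wait-for-rpc), then drives it entirely over its RPC socket; the commands below are as traced, with repository paths shortened:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--wait-for-rpc defers framework init so accel options could be injected first (none are, since scan_dsa=false), and --ddgst enables the NVMe/TCP data digest whose CRC32C work this test measures.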
00:26:59.520 26799.00 IOPS, 104.68 MiB/s [2024-10-17T17:34:23.304Z] 26150.00 IOPS, 102.15 MiB/s 00:26:59.520 Latency(us) 00:26:59.520 [2024-10-17T17:34:23.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.521 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:59.521 nvme0n1 : 2.00 26157.62 102.18 0.00 0.00 4888.22 2543.42 11234.74 00:26:59.521 [2024-10-17T17:34:23.305Z] =================================================================================================================== 00:26:59.521 [2024-10-17T17:34:23.305Z] Total : 26157.62 102.18 0.00 0.00 4888.22 2543.42 11234.74 00:26:59.521 { 00:26:59.521 "results": [ 00:26:59.521 { 00:26:59.521 "job": "nvme0n1", 00:26:59.521 "core_mask": "0x2", 00:26:59.521 "workload": "randread", 00:26:59.521 "status": "finished", 00:26:59.521 "queue_depth": 128, 00:26:59.521 "io_size": 4096, 00:26:59.521 "runtime": 2.004311, 00:26:59.521 "iops": 26157.617256004683, 00:26:59.521 "mibps": 102.1781924062683, 00:26:59.521 "io_failed": 0, 00:26:59.521 "io_timeout": 0, 00:26:59.521 "avg_latency_us": 4888.221165716611, 00:26:59.521 "min_latency_us": 2543.4209523809523, 00:26:59.521 "max_latency_us": 11234.742857142857 00:26:59.521 } 00:26:59.521 ], 00:26:59.521 "core_count": 1 00:26:59.521 } 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:59.521 | select(.opcode=="crc32c") 00:26:59.521 | "\(.module_name) \(.executed)"' 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2242108 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2242108 ']' 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2242108 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2242108 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2242108' 00:26:59.521 killing process with pid 2242108 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2242108 00:26:59.521 Received shutdown signal, test time was about 2.000000 seconds 00:26:59.521 00:26:59.521 Latency(us) 00:26:59.521 [2024-10-17T17:34:23.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.521 [2024-10-17T17:34:23.305Z] =================================================================================================================== 00:26:59.521 [2024-10-17T17:34:23.305Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.521 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2242108 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2242588 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2242588 /var/tmp/bperf.sock 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2242588 ']' 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:59.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.780 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:59.780 [2024-10-17 19:34:23.455887] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:26:59.781 [2024-10-17 19:34:23.455934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242588 ] 00:26:59.781 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:59.781 Zero copy mechanism will not be used. 00:26:59.781 [2024-10-17 19:34:23.531002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.040 [2024-10-17 19:34:23.573054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.040 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.040 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:00.040 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:00.040 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:00.040 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:00.299 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.299 19:34:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.558 nvme0n1 00:27:00.558 19:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:00.558 19:34:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:00.558 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:00.558 Zero copy mechanism will not be used. 00:27:00.558 Running I/O for 2 seconds... 
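[annotation] Each run is graded the same way once bdevperf finishes: pull CRC32C statistics out of the accel framework and require that the expected module, software here because scan_dsa=false, actually executed work. The check from the trace, written straight-line:

    read -r acc_module acc_executed < <(
        scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    exp_module=software                       # a DSA-offload run would expect the DSA module
    (( acc_executed > 0 )) || exit 1          # some digests must have run
    [[ $acc_module == "$exp_module" ]] || exit 1   # and in the right engine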
00:27:02.432 5714.00 IOPS, 714.25 MiB/s [2024-10-17T17:34:26.216Z] 5694.00 IOPS, 711.75 MiB/s 00:27:02.432 Latency(us) 00:27:02.432 [2024-10-17T17:34:26.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.432 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:02.432 nvme0n1 : 2.00 5695.02 711.88 0.00 0.00 2806.85 639.76 7583.45 00:27:02.432 [2024-10-17T17:34:26.216Z] =================================================================================================================== 00:27:02.432 [2024-10-17T17:34:26.216Z] Total : 5695.02 711.88 0.00 0.00 2806.85 639.76 7583.45 00:27:02.432 { 00:27:02.432 "results": [ 00:27:02.432 { 00:27:02.432 "job": "nvme0n1", 00:27:02.432 "core_mask": "0x2", 00:27:02.432 "workload": "randread", 00:27:02.432 "status": "finished", 00:27:02.432 "queue_depth": 16, 00:27:02.432 "io_size": 131072, 00:27:02.432 "runtime": 2.002451, 00:27:02.432 "iops": 5695.020752068341, 00:27:02.432 "mibps": 711.8775940085426, 00:27:02.432 "io_failed": 0, 00:27:02.432 "io_timeout": 0, 00:27:02.432 "avg_latency_us": 2806.8479497586477, 00:27:02.432 "min_latency_us": 639.7561904761905, 00:27:02.432 "max_latency_us": 7583.451428571429 00:27:02.432 } 00:27:02.432 ], 00:27:02.432 "core_count": 1 00:27:02.432 } 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:02.691 | select(.opcode=="crc32c") 00:27:02.691 | "\(.module_name) \(.executed)"' 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2242588 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2242588 ']' 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2242588 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:02.691 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2242588 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' 
reactor_1 = sudo ']' 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2242588' 00:27:02.951 killing process with pid 2242588 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2242588 00:27:02.951 Received shutdown signal, test time was about 2.000000 seconds 00:27:02.951 00:27:02.951 Latency(us) 00:27:02.951 [2024-10-17T17:34:26.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.951 [2024-10-17T17:34:26.735Z] =================================================================================================================== 00:27:02.951 [2024-10-17T17:34:26.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2242588 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2243207 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2243207 /var/tmp/bperf.sock 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2243207 ']' 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:02.951 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:02.951 [2024-10-17 19:34:26.690710] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:27:02.951 [2024-10-17 19:34:26.690756] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243207 ] 00:27:03.210 [2024-10-17 19:34:26.766806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.210 [2024-10-17 19:34:26.809194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.210 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:03.210 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:03.210 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:03.210 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:03.210 19:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:03.470 19:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.470 19:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:03.727 nvme0n1 00:27:03.727 19:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:03.727 19:34:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:03.727 Running I/O for 2 seconds... 
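[annotation] The Device Information rows are easy to cross-check: MiB/s is just IOPS * io_size / 2^20. Against the two completed randread runs above:

    awk 'BEGIN { printf "%.2f\n", 26157.62 * 4096 / 1048576 }'    # 102.18 (4 KiB run)
    awk 'BEGIN { printf "%.2f\n", 5695.02 * 131072 / 1048576 }'   # 711.88 (128 KiB run)

The 131072-byte runs also log that they exceed the 65536-byte zero copy threshold, so the TCP socket layer copies those buffers instead of taking the zero-copy path.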
00:27:06.042 28375.00 IOPS, 110.84 MiB/s [2024-10-17T17:34:29.826Z] 28530.50 IOPS, 111.45 MiB/s 00:27:06.042 Latency(us) 00:27:06.042 [2024-10-17T17:34:29.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.042 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:06.042 nvme0n1 : 2.01 28550.51 111.53 0.00 0.00 4477.07 1802.24 9175.04 00:27:06.042 [2024-10-17T17:34:29.826Z] =================================================================================================================== 00:27:06.042 [2024-10-17T17:34:29.826Z] Total : 28550.51 111.53 0.00 0.00 4477.07 1802.24 9175.04 00:27:06.042 { 00:27:06.042 "results": [ 00:27:06.042 { 00:27:06.042 "job": "nvme0n1", 00:27:06.042 "core_mask": "0x2", 00:27:06.042 "workload": "randwrite", 00:27:06.042 "status": "finished", 00:27:06.042 "queue_depth": 128, 00:27:06.042 "io_size": 4096, 00:27:06.042 "runtime": 2.005323, 00:27:06.042 "iops": 28550.51281015577, 00:27:06.042 "mibps": 111.52544066467098, 00:27:06.042 "io_failed": 0, 00:27:06.042 "io_timeout": 0, 00:27:06.042 "avg_latency_us": 4477.074937258435, 00:27:06.042 "min_latency_us": 1802.24, 00:27:06.042 "max_latency_us": 9175.04 00:27:06.042 } 00:27:06.042 ], 00:27:06.042 "core_count": 1 00:27:06.042 } 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:06.042 | select(.opcode=="crc32c") 00:27:06.042 | "\(.module_name) \(.executed)"' 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2243207 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2243207 ']' 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2243207 00:27:06.042 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2243207 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo 
']' 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2243207' 00:27:06.043 killing process with pid 2243207 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2243207 00:27:06.043 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.043 00:27:06.043 Latency(us) 00:27:06.043 [2024-10-17T17:34:29.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.043 [2024-10-17T17:34:29.827Z] =================================================================================================================== 00:27:06.043 [2024-10-17T17:34:29.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.043 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2243207 00:27:06.301 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2243746 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2243746 /var/tmp/bperf.sock 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2243746 ']' 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:06.302 19:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:06.302 [2024-10-17 19:34:29.955467] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:27:06.302 [2024-10-17 19:34:29.955515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2243746 ] 00:27:06.302 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:06.302 Zero copy mechanism will not be used. 00:27:06.302 [2024-10-17 19:34:30.032375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.302 [2024-10-17 19:34:30.081473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.561 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:06.561 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:06.561 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:06.561 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:06.561 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:06.820 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:06.820 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:07.079 nvme0n1 00:27:07.079 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:07.079 19:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:07.079 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:07.079 Zero copy mechanism will not be used. 00:27:07.079 Running I/O for 2 seconds... 
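Because bdevperf was started with --wait-for-rpc, its subsystems stay uninitialized until the harness sends framework_start_init; only then is the NVMe-oF controller attached with data digest enabled (--ddgst) and the workload kicked off via bdevperf.py. The "I/O size of 131072 is greater than zero copy threshold (65536)" notice is informational: 128 KiB I/O exceeds bdevperf's zero-copy cutoff, so the regular buffered path is used. The three commands, exactly as traced above, condensed into a sketch (rpc being a hypothetical wrapper around rpc.py -s /var/tmp/bperf.sock):

    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    rpc framework_start_init
    rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests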
00:27:09.398 6306.00 IOPS, 788.25 MiB/s [2024-10-17T17:34:33.182Z] 6683.00 IOPS, 835.38 MiB/s 00:27:09.398 Latency(us) 00:27:09.398 [2024-10-17T17:34:33.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.398 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:09.398 nvme0n1 : 2.00 6680.09 835.01 0.00 0.00 2390.96 1763.23 9549.53 00:27:09.398 [2024-10-17T17:34:33.182Z] =================================================================================================================== 00:27:09.398 [2024-10-17T17:34:33.182Z] Total : 6680.09 835.01 0.00 0.00 2390.96 1763.23 9549.53 00:27:09.398 { 00:27:09.398 "results": [ 00:27:09.398 { 00:27:09.398 "job": "nvme0n1", 00:27:09.399 "core_mask": "0x2", 00:27:09.399 "workload": "randwrite", 00:27:09.399 "status": "finished", 00:27:09.399 "queue_depth": 16, 00:27:09.399 "io_size": 131072, 00:27:09.399 "runtime": 2.003265, 00:27:09.399 "iops": 6680.0947453282515, 00:27:09.399 "mibps": 835.0118431660314, 00:27:09.399 "io_failed": 0, 00:27:09.399 "io_timeout": 0, 00:27:09.399 "avg_latency_us": 2390.9596532655805, 00:27:09.399 "min_latency_us": 1763.230476190476, 00:27:09.399 "max_latency_us": 9549.531428571428 00:27:09.399 } 00:27:09.399 ], 00:27:09.399 "core_count": 1 00:27:09.399 } 00:27:09.399 19:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:09.399 19:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:09.399 19:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:09.399 19:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:09.399 | select(.opcode=="crc32c") 00:27:09.399 | "\(.module_name) \(.executed)"' 00:27:09.399 19:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2243746 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2243746 ']' 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2243746 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2243746 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # 
'[' reactor_1 = sudo ']' 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2243746' 00:27:09.399 killing process with pid 2243746 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2243746 00:27:09.399 Received shutdown signal, test time was about 2.000000 seconds 00:27:09.399 00:27:09.399 Latency(us) 00:27:09.399 [2024-10-17T17:34:33.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.399 [2024-10-17T17:34:33.183Z] =================================================================================================================== 00:27:09.399 [2024-10-17T17:34:33.183Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.399 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2243746 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2242085 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2242085 ']' 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2242085 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2242085 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2242085' 00:27:09.658 killing process with pid 2242085 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2242085 00:27:09.658 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2242085 00:27:09.918 00:27:09.918 real 0m13.775s 00:27:09.918 user 0m26.209s 00:27:09.918 sys 0m4.623s 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 ************************************ 00:27:09.918 END TEST nvmf_digest_clean 00:27:09.918 ************************************ 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 ************************************ 00:27:09.918 START TEST nvmf_digest_error 00:27:09.918 ************************************ 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # 
run_digest_error 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # nvmfpid=2244263 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # waitforlisten 2244263 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2244263 ']' 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:09.918 19:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:09.918 [2024-10-17 19:34:33.597567] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:27:09.918 [2024-10-17 19:34:33.597621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.918 [2024-10-17 19:34:33.676977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.177 [2024-10-17 19:34:33.718363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.177 [2024-10-17 19:34:33.718399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.177 [2024-10-17 19:34:33.718407] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:10.177 [2024-10-17 19:34:33.718414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:10.177 [2024-10-17 19:34:33.718419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
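nvmf_digest_error brings up a dedicated nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with every tracepoint group enabled (-e 0xFFFF) and, as before, initialization parked behind --wait-for-rpc so crc32c handling can be rewired before the transport exists. A sketch of that launch using only the arguments visible above (the backgrounding and pid capture mirror what nvmfappstart does):

    ip netns exec cvl_0_0_ns_spdk \
        "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # the harness then waits for the target on /var/tmp/spdk.sock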
00:27:10.177 [2024-10-17 19:34:33.718999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.746 [2024-10-17 19:34:34.465154] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.746 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.005 null0 00:27:11.005 [2024-10-17 19:34:34.559839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.005 [2024-10-17 19:34:34.584033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2244494 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2244494 /var/tmp/bperf.sock 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2244494 ']' 
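The accel_assign_opc notice above is the crux of this test: the crc32c opcode is routed to the accel "error" module, which can later be told to corrupt digests on demand. The notices that follow (null0, TCP transport init, listener on 10.0.0.2:4420) come from common_target_config. The exact parameters digest.sh passes are not echoed in the trace, so this is only a plausible sketch of a target-side RPC sequence that would produce those notices (bdev size and serial number are placeholders):

    trpc() { "$SPDK/scripts/rpc.py" "$@"; }   # target RPCs on /var/tmp/spdk.sock
    trpc accel_assign_opc -o crc32c -m error  # route crc32c to the error module
    trpc framework_start_init
    trpc bdev_null_create null0 100 4096      # 100 MB null bdev, 4 KiB blocks
    trpc nvmf_create_transport -t tcp
    trpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    trpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    trpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420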
00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:11.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.005 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.005 [2024-10-17 19:34:34.637226] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:27:11.005 [2024-10-17 19:34:34.637266] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244494 ] 00:27:11.005 [2024-10-17 19:34:34.711360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.005 [2024-10-17 19:34:34.752859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.264 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:11.264 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:11.264 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:11.264 19:34:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:11.264 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:11.264 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.264 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.264 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.264 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.264 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.833 nvme0n1 00:27:11.833 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:11.833 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.833 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
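Setup for the error run, as traced above: bdev_nvme_set_options enables per-error-code accounting and infinite bdev-level retries (--bdev-retry-count -1), accel_error_inject_error -t disable clears any stale injection on the target, the controller is attached with --ddgst, and finally -t corrupt -i 256 arms the target's error module to corrupt crc32c results. Condensed, with rpc/trpc being the bperf.sock and spdk.sock wrappers sketched earlier:

    rpc  bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    trpc accel_error_inject_error -o crc32c -t disable   # clean slate
    rpc  bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    trpc accel_error_inject_error -o crc32c -t corrupt -i 256   # arm corruption

With the target now emitting bad data digests, every READ below surfaces on the initiator as a data digest error followed by a retriable COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.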
00:27:11.833 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.833 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:11.833 19:34:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:11.833 Running I/O for 2 seconds... 00:27:11.833 [2024-10-17 19:34:35.446016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.833 [2024-10-17 19:34:35.446049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.833 [2024-10-17 19:34:35.446060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.833 [2024-10-17 19:34:35.457054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.833 [2024-10-17 19:34:35.457078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.833 [2024-10-17 19:34:35.457088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.833 [2024-10-17 19:34:35.465645] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.833 [2024-10-17 19:34:35.465675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.833 [2024-10-17 19:34:35.465684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.477138] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.477161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.477170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.488682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.488704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.488714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.497587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.497612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.497622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.509268] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.509290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.509298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.520352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.520374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.520383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.529118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.529140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.529152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.539996] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.540018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.540026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.551782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.551802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.551810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.560291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.560311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.560319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.570368] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.570389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.570397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.580255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.580276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.580284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.588699] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.588719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.588727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.599400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.599421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.599429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.834 [2024-10-17 19:34:35.610218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:11.834 [2024-10-17 19:34:35.610239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.834 [2024-10-17 19:34:35.610248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.618583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.618614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.618623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.631422] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.631444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.631452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.642554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.642574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.642583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.651715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.651736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.651744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.660858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.660877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.660885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.670274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.670295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.670304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.682526] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.682547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.682556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.691170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.691192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.691201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.703535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.703556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.703565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.714928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.714950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 
[2024-10-17 19:34:35.714958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.727418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.727439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.727448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.736039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.736060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.736068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.748509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.748529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.748538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.760614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.760635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.760644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.771305] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.094 [2024-10-17 19:34:35.771325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.094 [2024-10-17 19:34:35.771333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.094 [2024-10-17 19:34:35.779427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.779447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.779455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.791263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.791284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16003 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.791292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.801811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.801832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.801844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.814445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.814466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.814474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.823076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.823097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.823105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.834389] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.834410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.834419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.846425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.846446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.846454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.858478] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.858498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.858506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.095 [2024-10-17 19:34:35.871412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.095 [2024-10-17 19:34:35.871433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:20998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.095 [2024-10-17 19:34:35.871441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.354 [2024-10-17 19:34:35.879800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.354 [2024-10-17 19:34:35.879820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.354 [2024-10-17 19:34:35.879829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.354 [2024-10-17 19:34:35.891984] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.354 [2024-10-17 19:34:35.892005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.354 [2024-10-17 19:34:35.892013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.354 [2024-10-17 19:34:35.904594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.354 [2024-10-17 19:34:35.904625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.354 [2024-10-17 19:34:35.904634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.354 [2024-10-17 19:34:35.914561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.354 [2024-10-17 19:34:35.914580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.354 [2024-10-17 19:34:35.914589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.354 [2024-10-17 19:34:35.922625] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.922646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.922654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.933688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.933708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.933717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.941955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.941974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.941982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.954206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.954227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.954235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.963070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.963090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.963098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.974274] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.974295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.974303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.985457] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.985478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.985486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:35.998045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:35.998066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:35.998074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.006136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.006157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.006164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.018282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 
[2024-10-17 19:34:36.018304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.018313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.026294] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.026315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.026324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.038012] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.038033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.038041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.045937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.045958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.045966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.058011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.058031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.058040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.066080] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.066101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.066108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.076416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.076436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.076448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.086314] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.086334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.086342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.098303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.098324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.098332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.109633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.109654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.109662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.117403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.117423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.117431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.127767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.127787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.127795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.355 [2024-10-17 19:34:36.135900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.355 [2024-10-17 19:34:36.135921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.355 [2024-10-17 19:34:36.135929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.148795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.148816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.148825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.159746] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.159766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.159774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.167983] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.168003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.168011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.179527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.179556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.187898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.187919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.187928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.197830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.197851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.197859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.207640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.207661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.207669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.215809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.215830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.215839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
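Each triplet in the storm above is one injected failure as seen from the initiator: nvme_tcp.c:1470 reports the data digest mismatch on the received payload, nvme_qpair.c prints the READ it belonged to, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer keeps retrying because of --bdev-retry-count -1. When the 2-second run ends, the harness judges the result the same way the clean tests did, by asking the accel layer who actually executed the crc32c work; a sketch mirroring the checks traced earlier at digest.sh@93-@96:

    read -r acc_module acc_executed < <(rpc accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c")
                  | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 ))        # digests were really computed...
    [[ $acc_module == software ]] # ...by the expected module (software in the clean runs)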
00:27:12.615 [2024-10-17 19:34:36.225365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.225386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.225394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.236916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.236937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.236945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.246135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.246156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.246167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.254484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.254504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.254512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.263288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.263308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.263316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.272379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.272399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:23748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.615 [2024-10-17 19:34:36.272408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.615 [2024-10-17 19:34:36.281841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.615 [2024-10-17 19:34:36.281861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.281869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.291412] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.291432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.291440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.300488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.300508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.300516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.309596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.309623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.309631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.321795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.321815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.321823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.334105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.334129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.334137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.345050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.345070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.345078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.354103] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.354124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:22469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.354132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.365469] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.365491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.365499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.377739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.377760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.377768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.388783] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.388804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.388812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.616 [2024-10-17 19:34:36.397095] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.616 [2024-10-17 19:34:36.397118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.616 [2024-10-17 19:34:36.397126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.408121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.408143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.408153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.415948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.415970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.415979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.425554] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.425575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.425583] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 24557.00 IOPS, 95.93 MiB/s [2024-10-17T17:34:36.660Z] [2024-10-17 19:34:36.437475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.437496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:10599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.437505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.445837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.445858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.445866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.454844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.454865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.454873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.464359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.464380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.464388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.474749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.474769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.474778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.482932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.482953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.482962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.492958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.492979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.492987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.501579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.501606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.501619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.513242] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.876 [2024-10-17 19:34:36.513263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.876 [2024-10-17 19:34:36.513271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.876 [2024-10-17 19:34:36.525767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.525789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.525798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.537048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.537069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.537078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.548733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.548755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.548763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.556709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.556729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.556738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.567218] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.567240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.567248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.576638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.576660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.576668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.586157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.586177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.586185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.596371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.596392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.596401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.606805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.606827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.606836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.616773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.616794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.616803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.625624] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.625644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.625653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.635557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 
[2024-10-17 19:34:36.635578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.635586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.645811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.645832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.645840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:12.877 [2024-10-17 19:34:36.654964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:12.877 [2024-10-17 19:34:36.654984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.877 [2024-10-17 19:34:36.654992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.663742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.663763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.663772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.672142] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.672163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.672175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.681931] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.681951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.681959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.692502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.692523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.692531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.703200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.703220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.703228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.711428] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.711449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.711457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.720994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.721016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.721024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.730622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.730660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.730668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.739883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.739904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.739912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.749197] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.749219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.749227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.760425] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.760449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.760457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.768541] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.768561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.768569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.779890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.779911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:7437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.779919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.791223] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.791243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.791252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.137 [2024-10-17 19:34:36.802060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.137 [2024-10-17 19:34:36.802080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.137 [2024-10-17 19:34:36.802089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.811877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.811897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.811905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.820488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.820508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.820516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.829316] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.829336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.829344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:13.138 [2024-10-17 19:34:36.839055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.839075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.839083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.848872] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.848892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.848900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.856949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.856969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.856978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.866740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.866760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.866768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.875589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.875613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.875622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.884964] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.884984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.884992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.894531] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.894552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.894560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.903742] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.903762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.903770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.138 [2024-10-17 19:34:36.912866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.138 [2024-10-17 19:34:36.912885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.138 [2024-10-17 19:34:36.912893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.922076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.922097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.397 [2024-10-17 19:34:36.922109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.931629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.931650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.397 [2024-10-17 19:34:36.931658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.940044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.940063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.397 [2024-10-17 19:34:36.940072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.949312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.949333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.397 [2024-10-17 19:34:36.949341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.959353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.959373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.397 [2024-10-17 19:34:36.959382] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.970174] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.970193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.397 [2024-10-17 19:34:36.970201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.397 [2024-10-17 19:34:36.979379] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.397 [2024-10-17 19:34:36.979399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:36.979407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:36.987818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:36.987838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:36.987846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:36.996773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:36.996793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:36.996802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.006626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.006650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.006658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.016944] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.016965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.016974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.025419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.025440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.025448] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.034975] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.034995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.035003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.043665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.043686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.043695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.053329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.053350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:21351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.053358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.062780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.062800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.062808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.073283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.073304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.073312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.083413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.083433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.083442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.091815] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.091834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.398 [2024-10-17 19:34:37.091843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.101475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.101504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.112423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.112443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.112451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.120087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.120107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.120116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.129894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.129917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.129926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.138829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.138850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.138858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.148005] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.148026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.148035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.157941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.157961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11410 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.157969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.165722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.165743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.165758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.398 [2024-10-17 19:34:37.175694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.398 [2024-10-17 19:34:37.175714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.398 [2024-10-17 19:34:37.175722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.658 [2024-10-17 19:34:37.186653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.658 [2024-10-17 19:34:37.186675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.658 [2024-10-17 19:34:37.186684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.658 [2024-10-17 19:34:37.195716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.658 [2024-10-17 19:34:37.195736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.658 [2024-10-17 19:34:37.195744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.658 [2024-10-17 19:34:37.207582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.658 [2024-10-17 19:34:37.207608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.658 [2024-10-17 19:34:37.207616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.658 [2024-10-17 19:34:37.219124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.658 [2024-10-17 19:34:37.219144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.658 [2024-10-17 19:34:37.219153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:13.658 [2024-10-17 19:34:37.226812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0) 00:27:13.658 [2024-10-17 19:34:37.226832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:12964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.658 [2024-10-17 19:34:37.226841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern repeats for roughly twenty further commands between 19:34:37.236 and 19:34:37.425: a data digest error on tqpair=(0xc9cac0) reported by nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done, the failed READ (len:1, cid and lba varying), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:27:13.659 25455.00 IOPS, 99.43 MiB/s [2024-10-17T17:34:37.443Z]
[2024-10-17 19:34:37.435583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xc9cac0)
00:27:13.659 [2024-10-17 19:34:37.435607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:13.659 [2024-10-17 19:34:37.435616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:13.659
00:27:13.659 Latency(us)
00:27:13.659 [2024-10-17T17:34:37.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:13.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:13.659 nvme0n1 : 2.00 25459.40 99.45 0.00 0.00 5022.11 2481.01 17850.76
00:27:13.659 [2024-10-17T17:34:37.443Z] ===================================================================================================================
00:27:13.659 [2024-10-17T17:34:37.443Z] Total : 25459.40 99.45 0.00 0.00 5022.11 2481.01 17850.76
00:27:13.659 {
00:27:13.659   "results": [
00:27:13.659     {
00:27:13.659       "job": "nvme0n1",
00:27:13.659       "core_mask": "0x2",
00:27:13.659       "workload": "randread",
00:27:13.659       "status": "finished",
00:27:13.659       "queue_depth": 128,
00:27:13.659       "io_size": 4096,
00:27:13.659       "runtime": 2.004682,
00:27:13.659       "iops": 25459.3995456636,
00:27:13.659       "mibps": 99.45077947524844,
00:27:13.659       "io_failed": 0,
00:27:13.659       "io_timeout": 0,
00:27:13.659       "avg_latency_us": 5022.11244909955,
00:27:13.659       "min_latency_us": 2481.0057142857145,
00:27:13.659       "max_latency_us": 17850.758095238096
00:27:13.659     }
00:27:13.659   ],
00:27:13.659   "core_count": 1
00:27:13.659 }
00:27:13.918 19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:13.918 19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:13.918 19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:13.918 | .driver_specific
00:27:13.918 | .nvme_error
00:27:13.918 | .status_code
00:27:13.918 | .command_transient_transport_error'
00:27:13.918 19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2244494
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2244494 ']'
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2244494
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244494
00:27:14.178 19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244494'
killing process with pid 2244494
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2244494
Received shutdown signal, test time was about 2.000000 seconds
00:27:14.178
00:27:14.178 Latency(us)
00:27:14.178 [2024-10-17T17:34:37.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.178 [2024-10-17T17:34:37.962Z] ===================================================================================================================
00:27:14.178 [2024-10-17T17:34:37.962Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2244494
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2244996
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2244996 /var/tmp/bperf.sock
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2244996 ']'
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
19:34:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-10-17 19:34:37.917935] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
[2024-10-17 19:34:37.917988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244996 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
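The pass/fail gate traced above (host/digest.sh@71, which reduced to (( 200 > 0 ))) is an RPC call plus a jq projection over the per-bdev NVMe error statistics. Below is a minimal standalone sketch of that step, not the test script itself; it assumes a live bperf RPC socket at /var/tmp/bperf.sock and an attached bdev named nvme0n1, both as in this run:

  #!/usr/bin/env bash
  # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
  # The nvme_error block is present because the test enabled
  # bdev_nvme_set_options --nvme-error-stat earlier in the run.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # Any non-zero count means the injected digest corruption surfaced as
  # transient transport errors; the 4096-byte run above reported 200.
  (( errcount > 0 )) && echo "OK: $errcount transient transport errors observed"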
00:27:14.437 [2024-10-17 19:34:37.996591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:14.437 [2024-10-17 19:34:38.033396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:14.695 19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:14.953 nvme0n1
00:27:14.953 19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
19:34:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:15.213 I/O size of 131072 is greater than zero copy threshold (65536).
00:27:15.213 Zero copy mechanism will not be used.
00:27:15.213 Running I/O for 2 seconds...
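For orientation before the next run of error output: the RPC sequence just traced is the whole digest-error setup for the 131072-byte pass. A condensed sketch of the same calls follows; the command names and arguments are copied from the trace, while their combination into one script, and the assumption that a target is already listening on 10.0.0.2:4420, are illustrative only. Note that in the test the accel_error_inject_error calls go through rpc_cmd, a different RPC socket than bperf's:

  #!/usr/bin/env bash
  sock=/var/tmp/bperf.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Record NVMe error completions per bdev and retry failed I/O indefinitely,
  # so injected digest failures are counted without failing bdevperf's job.
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP controller with data digest enabled (--ddgst), so every
  # received data PDU is checked against its CRC32C digest.
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Periodically corrupt crc32c results in the accel layer (arguments verbatim
  # from the trace); corrupted digests then fail the --ddgst check above.
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the queued bdevperf workload (randread, 131072-byte I/O, qd 16).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s "$sock" perform_tests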
00:27:15.213 [2024-10-17 19:34:38.752929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:15.213 [2024-10-17 19:34:38.752965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.213 [2024-10-17 19:34:38.752976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern repeats roughly every 5 ms for the rest of the excerpt: a data digest error on tqpair=(0x7f1630) reported by nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done, the failed READ (len:32, cid and lba varying), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, from 19:34:38.758 through 19:34:39.276 ...]
00:27:15.738 [2024-10-17 19:34:39.281860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:15.738 [2024-10-17 19:34:39.281882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10
nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-10-17 19:34:39.281891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.738 [2024-10-17 19:34:39.287269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.738 [2024-10-17 19:34:39.287292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-10-17 19:34:39.287300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.738 [2024-10-17 19:34:39.292685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.738 [2024-10-17 19:34:39.292707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-10-17 19:34:39.292715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.738 [2024-10-17 19:34:39.297997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.738 [2024-10-17 19:34:39.298019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.738 [2024-10-17 19:34:39.298027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.738 [2024-10-17 19:34:39.303490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.738 [2024-10-17 19:34:39.303512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.303520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.308700] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.308721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.308730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.314105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.314127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.314139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.319455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.319477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.319486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.325045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.325069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.325077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.330288] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.330309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.330317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.334906] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.334927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.334936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.337914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.337936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.337944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.343232] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.343253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.343261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.348419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.348439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.348448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.353565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 
[2024-10-17 19:34:39.353586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.353594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.358507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.358529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.358537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.363595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.363622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.363631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.368637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.368658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.368666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.373644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.373665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.373673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.378376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.378398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.378406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.383385] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.383407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.383415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.388540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.388563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.388571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.393720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.393742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.393750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.398824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.398847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.398859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.404019] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.404040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.404049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.408722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.408745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.408753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.413639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.413662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.413670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.418743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.418766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.418775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.423685] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.423708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.423717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.428953] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.428975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.428983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.434035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.434057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.434065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.439109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.439131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.439140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.444343] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.444368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.739 [2024-10-17 19:34:39.444377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.739 [2024-10-17 19:34:39.449599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.739 [2024-10-17 19:34:39.449629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.449638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.455033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.455056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.455064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:27:15.740 [2024-10-17 19:34:39.460477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.460500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.460508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.465786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.465808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.465817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.471076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.471098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.471106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.476415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.476437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.476445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.481785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.481807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.481815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.487116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.487137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.487145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.492407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.492429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.492438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.497746] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.497768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.497776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.503097] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.503118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.503126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.508253] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.508274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.508282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.513468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.513492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.513500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:15.740 [2024-10-17 19:34:39.518726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:15.740 [2024-10-17 19:34:39.518748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:15.740 [2024-10-17 19:34:39.518758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.000 [2024-10-17 19:34:39.524034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.000 [2024-10-17 19:34:39.524058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.000 [2024-10-17 19:34:39.524067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.000 [2024-10-17 19:34:39.529467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.000 [2024-10-17 19:34:39.529489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.000 [2024-10-17 19:34:39.529498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.000 [2024-10-17 19:34:39.534759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.000 [2024-10-17 19:34:39.534781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.000 [2024-10-17 19:34:39.534793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.000 [2024-10-17 19:34:39.540091] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.000 [2024-10-17 19:34:39.540113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.000 [2024-10-17 19:34:39.540121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.000 [2024-10-17 19:34:39.545438] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.545460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.545468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.550794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.550816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.550825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.555981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.556003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.556012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.561146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.561168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.561176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.566490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.566512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.566520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.571796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.571818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.571826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.576991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.577012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.577020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.583163] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.583190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.583198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.590730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.590753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.590761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.597611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.597634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.597642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.604445] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.604467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.604476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.610930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.610953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 
[2024-10-17 19:34:39.610962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.616731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.616754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.616763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.623509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.623532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.623541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.630900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.630924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.630933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.637088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.637111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.637120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.643536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.643559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.643567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.648098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.648120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.648129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.654210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.654233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16928 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.654242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.659791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.659815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.659824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.666467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.666490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.666499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.673876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.673899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.673908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.680100] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.680121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.680130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.686527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.686549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.686558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.693165] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.693187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.693199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.699055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.001 [2024-10-17 19:34:39.699078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.001 [2024-10-17 19:34:39.699086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.001 [2024-10-17 19:34:39.704309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.704330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.704339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.709650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.709672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.709680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.715126] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.715148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.715156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.720471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.720493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.720501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.725848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.725871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.725880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.731115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.731137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.736180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.736202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.736210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.741359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.741385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.741393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.746522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.746544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.746552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.002 5868.00 IOPS, 733.50 MiB/s [2024-10-17T17:34:39.786Z] [2024-10-17 19:34:39.752525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.752546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.752555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.758992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.759014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.759023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.765661] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.765683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.765692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.770752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.770776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.770784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.776066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.776089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.776098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.781364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.781386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.781395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.002 [2024-10-17 19:34:39.784176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.002 [2024-10-17 19:34:39.784197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.002 [2024-10-17 19:34:39.784205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.789499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.789522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.789531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.794952] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.794974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.794982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.800501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.800523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.800531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.805846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.805868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.805876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.811349] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.811371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.811379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.816786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.816808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.816816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.821888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.821910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.821918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.826737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.826760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.826768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.832057] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.832079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.832092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.837267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.837288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.837296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.842533] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.842554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.842562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
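Every entry above is the same event repeating: the host side recomputes the CRC32C data digest over a received C2H data PDU, detects a mismatch (this test appears to inject digest corruption deliberately), and completes the READ with TRANSIENT TRANSPORT ERROR, i.e. the (00/22) pair printed by spdk_nvme_print_completion (SCT 0x0 generic / SC 0x22). A minimal sketch of what such a digest check boils down to, assuming only what the log shows; crc32c_update() and pdu_data_digest_ok() below are illustrative stand-ins, not SPDK API:

    /* Illustrative sketch only, not SPDK source. NVMe/TCP data digests are
     * CRC32C over the PDU payload; the all-ones seed and final XOR used here
     * are the common convention and an assumption of this sketch. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78. */
    static uint32_t crc32c_update(const void *buf, size_t len, uint32_t crc)
    {
        const uint8_t *p = buf;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78U & -(crc & 1U));
        }
        return crc;
    }

    /* Recompute the digest over the received payload and compare it with
     * the 32-bit DDGST field carried at the tail of the C2H data PDU. */
    static bool pdu_data_digest_ok(const uint8_t *payload, size_t len,
                                   uint32_t ddgst)
    {
        return (crc32c_update(payload, len, 0xFFFFFFFFU) ^ 0xFFFFFFFFU) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[32] = "c2h data pdu payload";
        uint32_t ddgst = crc32c_update(payload, sizeof(payload), 0xFFFFFFFFU)
                         ^ 0xFFFFFFFFU;

        printf("intact:    %s\n",
               pdu_data_digest_ok(payload, sizeof(payload), ddgst)
                   ? "ok" : "data digest error");
        payload[5] ^= 0x40; /* flip one bit in flight, as error injection would */
        printf("corrupted: %s\n",
               pdu_data_digest_ok(payload, sizeof(payload), ddgst)
                   ? "ok" : "data digest error");
        return 0;
    }

Because each mismatch is surfaced as a transient, retryable completion status rather than a fatal connection error, the workload keeps running through the error storm, consistent with the 5868.00 IOPS / 733.50 MiB/s checkpoint the log prints in the middle of it.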
00:27:16.263 [2024-10-17 19:34:39.847724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.847746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.847755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.852887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.852908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.852916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.857986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.858007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.858015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.863373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.863394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.863402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.869084] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.869106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.869114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.874407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.874429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.874437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.879682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.879704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.879712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.885107] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.885129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.885137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.263 [2024-10-17 19:34:39.890449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.263 [2024-10-17 19:34:39.890470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.263 [2024-10-17 19:34:39.890478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.895821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.895843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.895851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.901073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.901095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.901104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.906353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.906375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.906383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.911658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.911679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.911687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.916852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.916874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.916883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.922525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.922547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.922558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.928110] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.928132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.928140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.933814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.933835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.933844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.939354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.939376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.939384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.945403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.945425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.945433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.950744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.950767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.950775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.956209] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.956230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.956238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.961622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.961660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.961668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.966961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.966981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.966990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.972304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.972329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.972337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.977733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.977755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.977763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.983055] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.983076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.983085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.988506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.988528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.988536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.993771] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.993792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 
[2024-10-17 19:34:39.993800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:39.999418] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:39.999439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:39.999447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:40.005221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:40.005244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:40.005253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:40.011365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:40.011389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:40.011398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:40.017034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:40.017058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:40.017068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:40.021836] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:40.021859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:40.021868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.264 [2024-10-17 19:34:40.026458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.264 [2024-10-17 19:34:40.026480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.264 [2024-10-17 19:34:40.026496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.265 [2024-10-17 19:34:40.031411] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.265 [2024-10-17 19:34:40.031434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:16.265 [2024-10-17 19:34:40.031442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.265 [2024-10-17 19:34:40.036682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.265 [2024-10-17 19:34:40.036705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.265 [2024-10-17 19:34:40.036715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.265 [2024-10-17 19:34:40.041648] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.265 [2024-10-17 19:34:40.041670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.265 [2024-10-17 19:34:40.041679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.265 [2024-10-17 19:34:40.046883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.525 [2024-10-17 19:34:40.046906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.525 [2024-10-17 19:34:40.046917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.525 [2024-10-17 19:34:40.051987] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.525 [2024-10-17 19:34:40.052011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.525 [2024-10-17 19:34:40.052020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.525 [2024-10-17 19:34:40.057275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.525 [2024-10-17 19:34:40.057297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.525 [2024-10-17 19:34:40.057307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.525 [2024-10-17 19:34:40.062467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.525 [2024-10-17 19:34:40.062489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.525 [2024-10-17 19:34:40.062502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.525 [2024-10-17 19:34:40.067686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.525 [2024-10-17 19:34:40.067719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.525 [2024-10-17 19:34:40.067727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.073326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.073348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.073357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.078681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.078703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.078711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.084072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.084094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.084103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.089556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.089578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.089586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.095026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.095047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.095055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.100338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.100360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.100368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.105519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.105540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.105548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.110866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.110892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.110901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.116233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.116255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.116263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.121748] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.121769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.121778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.127169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.127191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.127199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.132450] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.132471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.132480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.137814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.137836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.137845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.143157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 
[2024-10-17 19:34:40.143179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.143188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.148465] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.148495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.153910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.153932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.153941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.159295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.159316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.159325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.164535] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.164556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.164564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.170858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.170885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.170894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.176351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.176373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.176381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.181743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.181765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.181774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.187076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.187098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.187107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.192341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.192363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.192371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.197807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.197829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.197837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.203143] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.203164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.203176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.208633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.208655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.208663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.214048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.526 [2024-10-17 19:34:40.214071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.526 [2024-10-17 19:34:40.214079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.526 [2024-10-17 19:34:40.219542] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.219564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.219573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.224685] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.224708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.224717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.230518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.230540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.230549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.236125] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.236146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.236154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.241521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.241543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.241551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.246929] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.246951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.246960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.252194] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.252220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.252229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:16.527 [2024-10-17 19:34:40.257519] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.257542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.257551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.262683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.262705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.262713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.267938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.267960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.267969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.273394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.273416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.273425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.278899] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.278922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.278931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.284269] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.284292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.284300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.289686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.289708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.289718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.295264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.295293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.295303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.300784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.300805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.300813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.527 [2024-10-17 19:34:40.306127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.527 [2024-10-17 19:34:40.306149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.527 [2024-10-17 19:34:40.306157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.787 [2024-10-17 19:34:40.311545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.787 [2024-10-17 19:34:40.311568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.787 [2024-10-17 19:34:40.311576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.787 [2024-10-17 19:34:40.317726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.787 [2024-10-17 19:34:40.317749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.787 [2024-10-17 19:34:40.317757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.787 [2024-10-17 19:34:40.325454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.787 [2024-10-17 19:34:40.325476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.787 [2024-10-17 19:34:40.325485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.787 [2024-10-17 19:34:40.332618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.787 [2024-10-17 19:34:40.332640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.787 [2024-10-17 19:34:40.332649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.787 [2024-10-17 19:34:40.338910] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.787 [2024-10-17 19:34:40.338932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.338941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.345075] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.345097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.345106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.351318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.351339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.351355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.357331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.357353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.357361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.364262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.364284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.364293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.371860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.371882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.371891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.378039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.378060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.378069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.381592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.381621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.381630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.387521] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.387543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.387551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.394737] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.394760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.394769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.402318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.402340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.402349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.408574] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.408599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.408614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.414041] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.414062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.414071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.419297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.419319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 
[2024-10-17 19:34:40.419328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.424515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.424537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.424545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.430383] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.430405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.430414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.436297] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.436319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.436328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.442682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.442704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.442712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.448887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.448910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.448919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.454572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.454595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.454610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.460855] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.460877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21664 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.460885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.466858] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.466880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.466889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.473060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.473083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.473091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.479289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.479311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.479320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.485505] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.485526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.485534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.491781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.491804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.491813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.498577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.498598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.498612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.504669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.504691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.504699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.788 [2024-10-17 19:34:40.509884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.788 [2024-10-17 19:34:40.509906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.788 [2024-10-17 19:34:40.509917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.515121] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.515142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.515150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.520357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.520389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.525585] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.525614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.525624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.530841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.530862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.530871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.536914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.536936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.536944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.543315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.543339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.543348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.549446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.549468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.549476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.555494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.555516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.555525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.561669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.561691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.561699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:16.789 [2024-10-17 19:34:40.568157] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:16.789 [2024-10-17 19:34:40.568181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.789 [2024-10-17 19:34:40.568189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.574965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.574990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.574999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.581189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.581211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.581220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.587541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 
[2024-10-17 19:34:40.587563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.587571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.593941] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.593963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.593972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.599506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.599527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.599535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.604950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.604971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.604980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.610426] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.610447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.610461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.616248] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.616271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.616279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.623550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.623572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.623580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.630111] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.630135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.630144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.637264] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.637286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.637295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.644402] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.644425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.644435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.651466] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.651489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.651498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.658296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.658321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.658330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.665924] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.665947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.665956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.672781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.672808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.672818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.679492] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.679514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.679523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.686713] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.686737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.686745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.692802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.692825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.692834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.699888] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.699912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.699921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.708039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.708063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.708071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.714923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.714948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.714957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:17.048 [2024-10-17 19:34:40.721424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630) 00:27:17.048 [2024-10-17 19:34:40.721449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.048 [2024-10-17 19:34:40.721458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:27:17.048 [2024-10-17 19:34:40.726778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:17.048 [2024-10-17 19:34:40.726802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.048 [2024-10-17 19:34:40.726811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.048 [2024-10-17 19:34:40.732369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:17.048 [2024-10-17 19:34:40.732392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.048 [2024-10-17 19:34:40.732401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:17.048 [2024-10-17 19:34:40.738589] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:17.048 [2024-10-17 19:34:40.738618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.048 [2024-10-17 19:34:40.738626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:17.048 [2024-10-17 19:34:40.745313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:17.048 [2024-10-17 19:34:40.745336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.048 [2024-10-17 19:34:40.745344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:17.048 5641.50 IOPS, 705.19 MiB/s [2024-10-17T17:34:40.833Z] [2024-10-17 19:34:40.752564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7f1630)
00:27:17.049 [2024-10-17 19:34:40.752588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.049 [2024-10-17 19:34:40.752597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:17.049
00:27:17.049 Latency(us)
00:27:17.049 [2024-10-17T17:34:40.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.049 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:17.049 nvme0n1 : 2.00 5638.17 704.77 0.00 0.00 2834.53 628.05 8363.64
00:27:17.049 [2024-10-17T17:34:40.833Z] ===================================================================================================================
00:27:17.049 [2024-10-17T17:34:40.833Z] Total : 5638.17 704.77 0.00 0.00 2834.53 628.05 8363.64
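A quick consistency check on the two throughput columns above: with 128 KiB reads (io_size 131072), MiB/s is just IOPS divided by 8, so the table row is internally consistent. A one-liner to verify, values copied from the table (any shell with awk will do):

  awk 'BEGIN { printf "%.2f\n", 5638.17 * 131072 / 1048576 }'   # prints 704.77

which matches both the table row and the "mibps" field in the JSON dump that follows.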
00:27:17.049 {
00:27:17.049 "results": [
00:27:17.049 {
00:27:17.049 "job": "nvme0n1",
00:27:17.049 "core_mask": "0x2",
00:27:17.049 "workload": "randread",
00:27:17.049 "status": "finished",
00:27:17.049 "queue_depth": 16,
00:27:17.049 "io_size": 131072,
00:27:17.049 "runtime": 2.004019,
00:27:17.049 "iops": 5638.170097189697,
00:27:17.049 "mibps": 704.7712621487121,
00:27:17.049 "io_failed": 0,
00:27:17.049 "io_timeout": 0,
00:27:17.049 "avg_latency_us": 2834.529394341682,
00:27:17.049 "min_latency_us": 628.0533333333333,
00:27:17.049 "max_latency_us": 8363.641904761906
00:27:17.049 }
00:27:17.049 ],
00:27:17.049 "core_count": 1
00:27:17.049 }
00:27:17.049 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:17.049 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:17.049 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:17.049 | .driver_specific
00:27:17.049 | .nvme_error
00:27:17.049 | .status_code
00:27:17.049 | .command_transient_transport_error'
00:27:17.049 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 364 > 0 ))
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2244996
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2244996 ']'
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2244996
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:17.307 19:34:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244996
00:27:17.307 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:17.307 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:17.307 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244996'
00:27:17.307 killing process with pid 2244996
00:27:17.307 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2244996
00:27:17.307 Received shutdown signal, test time was about 2.000000 seconds
00:27:17.307
00:27:17.307 Latency(us)
00:27:17.307 [2024-10-17T17:34:41.091Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:17.307 [2024-10-17T17:34:41.091Z] ===================================================================================================================
00:27:17.307 [2024-10-17T17:34:41.091Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:17.307 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2244996
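The get_transient_errcount helper traced above is a thin wrapper: it asks bdevperf's RPC server for per-bdev I/O statistics and picks one NVMe error counter out of the reply with jq. Those counters exist because the controller was set up with bdev_nvme_set_options --nvme-error-stat and --bdev-retry-count -1, so each injected digest failure is retried and tallied under its status code instead of failing the I/O; here 364 TRANSIENT TRANSPORT ERROR (00/22) completions were counted during the 2-second run, and (( 364 > 0 )) lets the test pass. A standalone sketch of the same check, assuming the same RPC socket and bdev name as this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Fetch I/O statistics from bdevperf's RPC server and extract the
  # transient-transport-error tally kept by the NVMe bdev driver.
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes if at least one injected error was seen.
  (( errs > 0 )) || { echo "no transient transport errors counted" >&2; exit 1; }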
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2245653
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2245653 /var/tmp/bperf.sock
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2245653 ']'
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:27:17.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:17.566 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:17.566 [2024-10-17 19:34:41.238988] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
00:27:17.566 [2024-10-17 19:34:41.239033] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2245653 ]
00:27:17.566 [2024-10-17 19:34:41.313627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:17.825 [2024-10-17 19:34:41.355274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:17.825 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:17.825 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
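With the previous bperf instance gone, run_bperf_err randwrite 4096 128 repeats the cycle for 4 KiB random writes at queue depth 128: it launches bdevperf idle on its own RPC socket (-z makes it wait for an RPC before doing I/O), polls until the socket comes up, and then, as the traces that follow show, re-enables error statistics, attaches the target with the TCP data digest (--ddgst) turned on, and arms crc32c corruption in the accel layer so the run produces data digest errors. A condensed sketch of that sequence, not the verbatim digest.sh code; the wait loop stands in for waitforlisten and error handling is omitted:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf on core 1 (-m 2) with a private RPC socket; -z makes it
  # sit idle until a perform_tests RPC arrives.
  "$spdk/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Simplified waitforlisten: poll until the UNIX-domain RPC socket exists.
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

  rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Count NVMe errors per status code and retry indefinitely, so injected
  # digest failures show up as counters instead of failed I/O.
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any stale crc32c injection, then attach the target with the TCP
  # data digest enabled.
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results in the accel layer (-i 256, as digest.sh passes
  # it), so digests computed for the data PDUs come out wrong.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the timed run; bdevperf prints stats like those above when done.
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests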
00:27:17.825 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:17.825 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:27:18.084 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:27:18.084 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.084 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:18.084 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.084 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:18.084 19:34:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:27:18.342 nvme0n1
00:27:18.342 19:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:27:18.342 19:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:18.342 19:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:27:18.342 19:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:18.342 19:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:27:18.342 19:34:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:27:18.602 Running I/O for 2 seconds...
00:27:18.602 [2024-10-17 19:34:42.185960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ee5c8
00:27:18.602 [2024-10-17 19:34:42.186725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.602 [2024-10-17 19:34:42.186752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:18.602 [2024-10-17 19:34:42.197405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f9f68
00:27:18.602 [2024-10-17 19:34:42.198785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.602 [2024-10-17 19:34:42.198809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:18.602 [2024-10-17 19:34:42.203858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e99d8
00:27:18.602 [2024-10-17 19:34:42.204436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.602 [2024-10-17 19:34:42.204456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:18.602 [2024-10-17 19:34:42.213131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7100
00:27:18.602 [2024-10-17 19:34:42.213691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.602 [2024-10-17 19:34:42.213712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:18.602 [2024-10-17 19:34:42.223388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e2c28
00:27:18.602 [2024-10-17 19:34:42.224504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1308 len:1 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:27:18.602 [2024-10-17 19:34:42.224524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.232558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e3d08 00:27:18.602 [2024-10-17 19:34:42.233228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.233249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.241670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fef90 00:27:18.602 [2024-10-17 19:34:42.242623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.242642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.250203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f3a28 00:27:18.602 [2024-10-17 19:34:42.251096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.251116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.259913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e23b8 00:27:18.602 [2024-10-17 19:34:42.260802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:11087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.260822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.268231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ea248 00:27:18.602 [2024-10-17 19:34:42.269111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.269131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.279350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd640 00:27:18.602 [2024-10-17 19:34:42.280786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.280806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.285915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fdeb0 00:27:18.602 [2024-10-17 19:34:42.286636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12248 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.286655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.295760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6300 00:27:18.602 [2024-10-17 19:34:42.296305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.296328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.304850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1b48 00:27:18.602 [2024-10-17 19:34:42.305721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.305740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.313453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eee38 00:27:18.602 [2024-10-17 19:34:42.314219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.314238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.324061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720 00:27:18.602 [2024-10-17 19:34:42.325164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.325183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.333111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720 00:27:18.602 [2024-10-17 19:34:42.334214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.334234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.342109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720 00:27:18.602 [2024-10-17 19:34:42.343205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.343224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.351075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720 00:27:18.602 [2024-10-17 19:34:42.352187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.352207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.360069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720 00:27:18.602 [2024-10-17 19:34:42.361181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.361200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.602 [2024-10-17 19:34:42.370294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720 00:27:18.602 [2024-10-17 19:34:42.371777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.602 [2024-10-17 19:34:42.371796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.603 [2024-10-17 19:34:42.376659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e4578 00:27:18.603 [2024-10-17 19:34:42.377305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.603 [2024-10-17 19:34:42.377324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.387224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eaef0 00:27:18.863 [2024-10-17 19:34:42.388357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.388377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.396289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ef6a8 00:27:18.863 [2024-10-17 19:34:42.397508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.397527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.405557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e4578 00:27:18.863 [2024-10-17 19:34:42.406313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.406332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.415350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ebfd0 00:27:18.863 [2024-10-17 19:34:42.416680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:66 nsid:1 lba:1970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.416700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.424853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7538 00:27:18.863 [2024-10-17 19:34:42.426413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.426432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.431453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6300 00:27:18.863 [2024-10-17 19:34:42.432323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.432343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.442477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e3d08 00:27:18.863 [2024-10-17 19:34:42.443736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.443756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.451277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e9168 00:27:18.863 [2024-10-17 19:34:42.452494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.452514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.460654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166edd58 00:27:18.863 [2024-10-17 19:34:42.461438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.461458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.469920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6300 00:27:18.863 [2024-10-17 19:34:42.470953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.470972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.478676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0ff8 00:27:18.863 [2024-10-17 19:34:42.479564] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.479584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.488038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de038 00:27:18.863 [2024-10-17 19:34:42.488938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.488958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.497285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de038 00:27:18.863 [2024-10-17 19:34:42.498181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.498201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.506863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6fa8 00:27:18.863 [2024-10-17 19:34:42.507941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:19370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.507960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.517038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f92c0 00:27:18.863 [2024-10-17 19:34:42.518384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.518404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.525777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f9f68 00:27:18.863 [2024-10-17 19:34:42.526821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.526842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.535252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4b08 00:27:18.863 [2024-10-17 19:34:42.536322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.536344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.544698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f81e0 00:27:18.863 [2024-10-17 19:34:42.545900] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.545920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.553233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de8a8 00:27:18.863 [2024-10-17 19:34:42.554022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.554041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.562694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0ea0 00:27:18.863 [2024-10-17 19:34:42.563350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.563370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.571336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eaab8 00:27:18.863 [2024-10-17 19:34:42.572569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.572589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.580487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166dfdc0 00:27:18.863 [2024-10-17 19:34:42.581412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.581431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.589418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ec840 00:27:18.863 [2024-10-17 19:34:42.590333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.590352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.598357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ecc78 00:27:18.863 [2024-10-17 19:34:42.599318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.863 [2024-10-17 19:34:42.599336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:18.863 [2024-10-17 19:34:42.609770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e27f0 00:27:18.863 [2024-10-17 19:34:42.611327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.863 [2024-10-17 19:34:42.611345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:18.863 [2024-10-17 19:34:42.616095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7100
00:27:18.863 [2024-10-17 19:34:42.616892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.863 [2024-10-17 19:34:42.616917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:18.863 [2024-10-17 19:34:42.625063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f6cc8
00:27:18.863 [2024-10-17 19:34:42.625925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.863 [2024-10-17 19:34:42.625945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:18.863 [2024-10-17 19:34:42.636164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e5658
00:27:18.864 [2024-10-17 19:34:42.637436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.864 [2024-10-17 19:34:42.637455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:18.864 [2024-10-17 19:34:42.645193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:18.864 [2024-10-17 19:34:42.646426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.864 [2024-10-17 19:34:42.646444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.653186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de8a8
00:27:19.123 [2024-10-17 19:34:42.653963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:9862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.653982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.662704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fc560
00:27:19.123 [2024-10-17 19:34:42.663770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.663789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.671803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e3060
00:27:19.123 [2024-10-17 19:34:42.672908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.672927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.680394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fcdd0
00:27:19.123 [2024-10-17 19:34:42.681394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:6394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.681412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.689241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f2d80
00:27:19.123 [2024-10-17 19:34:42.690205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.690224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.698368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e3498
00:27:19.123 [2024-10-17 19:34:42.699345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.699364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:19.123 [2024-10-17 19:34:42.709299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fda78
00:27:19.123 [2024-10-17 19:34:42.710754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.123 [2024-10-17 19:34:42.710774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.715893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6b70
00:27:19.124 [2024-10-17 19:34:42.716634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.716653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.725327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f46d0
00:27:19.124 [2024-10-17 19:34:42.726193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:12608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.726212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.736111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eea00
00:27:19.124 [2024-10-17 19:34:42.737455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.737473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.745515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e5658
00:27:19.124 [2024-10-17 19:34:42.746977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.746996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.751875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd208
00:27:19.124 [2024-10-17 19:34:42.752578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.752598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.760850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fcdd0
00:27:19.124 [2024-10-17 19:34:42.761612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.761631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.771857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1868
00:27:19.124 [2024-10-17 19:34:42.773028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.773047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.779155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f9f68
00:27:19.124 [2024-10-17 19:34:42.779711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.779730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.789319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7100
00:27:19.124 [2024-10-17 19:34:42.790439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.790457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.797653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:19.124 [2024-10-17 19:34:42.798593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.798619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.807080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4b08
00:27:19.124 [2024-10-17 19:34:42.807958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.807977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.816210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e5a90
00:27:19.124 [2024-10-17 19:34:42.817096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.817115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.825564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f6458
00:27:19.124 [2024-10-17 19:34:42.826462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.826481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.835009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd208
00:27:19.124 [2024-10-17 19:34:42.836059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.836078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.845292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de8a8
00:27:19.124 [2024-10-17 19:34:42.846776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.846795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.851659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1710
00:27:19.124 [2024-10-17 19:34:42.852295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.852317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.860819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e2c28
00:27:19.124 [2024-10-17 19:34:42.861365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.861384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.870155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f8e88
00:27:19.124 [2024-10-17 19:34:42.870933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.870952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.878741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166df988
00:27:19.124 [2024-10-17 19:34:42.879486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.879514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.888175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0bc0
00:27:19.124 [2024-10-17 19:34:42.889046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.889065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.897655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f2d80
00:27:19.124 [2024-10-17 19:34:42.898635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.124 [2024-10-17 19:34:42.898653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:19.124 [2024-10-17 19:34:42.907282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1ca0
00:27:19.384 [2024-10-17 19:34:42.908453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.908473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.916867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166df988
00:27:19.384 [2024-10-17 19:34:42.918083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.918102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.926318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0350
00:27:19.384 [2024-10-17 19:34:42.927664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.927683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.935788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ebfd0
00:27:19.384 [2024-10-17 19:34:42.937282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.937301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.942398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f2d80
00:27:19.384 [2024-10-17 19:34:42.943171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.943190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.953511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f5be8
00:27:19.384 [2024-10-17 19:34:42.954696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.954715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.962757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1f80
00:27:19.384 [2024-10-17 19:34:42.964005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.964025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.971007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f2510
00:27:19.384 [2024-10-17 19:34:42.972252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.972271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.978777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0788
00:27:19.384 [2024-10-17 19:34:42.979425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.979453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.988183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720
00:27:19.384 [2024-10-17 19:34:42.988957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.988976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:42.997808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fef90
00:27:19.384 [2024-10-17 19:34:42.998694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:42.998713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:43.007309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e5ec8
00:27:19.384 [2024-10-17 19:34:43.008348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.384 [2024-10-17 19:34:43.008367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:19.384 [2024-10-17 19:34:43.016732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1710
00:27:19.384 [2024-10-17 19:34:43.017797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.017816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.026199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720
00:27:19.385 [2024-10-17 19:34:43.027457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.027476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.034514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ed4e8
00:27:19.385 [2024-10-17 19:34:43.035795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.035815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.044234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de470
00:27:19.385 [2024-10-17 19:34:43.044986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.045006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.052453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1ca0
00:27:19.385 [2024-10-17 19:34:43.053383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.053402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.061332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4b08
00:27:19.385 [2024-10-17 19:34:43.062114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.062133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.070703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e2c28
00:27:19.385 [2024-10-17 19:34:43.071255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.071274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.081115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0350
00:27:19.385 [2024-10-17 19:34:43.082484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.082502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.087629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fb8b8
00:27:19.385 [2024-10-17 19:34:43.088287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.088309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.097092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ef270
00:27:19.385 [2024-10-17 19:34:43.097908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.097927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.108359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eaef0
00:27:19.385 [2024-10-17 19:34:43.109614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.109633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.117790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:19.385 [2024-10-17 19:34:43.119161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.119179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.127246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fac10
00:27:19.385 [2024-10-17 19:34:43.128725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.128743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.133566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0630
00:27:19.385 [2024-10-17 19:34:43.134159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.134178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.142969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ee5c8
00:27:19.385 [2024-10-17 19:34:43.143807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.143826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.151573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eea00
00:27:19.385 [2024-10-17 19:34:43.152359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.152379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:19.385 [2024-10-17 19:34:43.160715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fc998
00:27:19.385 [2024-10-17 19:34:43.161503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.385 [2024-10-17 19:34:43.161522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.170298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0630
00:27:19.645 [2024-10-17 19:34:43.171020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.171040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.180819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eb760 27947.00 IOPS, 109.17 MiB/s [2024-10-17T17:34:43.429Z] [2024-10-17 19:34:43.181420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.181439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.190210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eff18
00:27:19.645 [2024-10-17 19:34:43.190917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.190936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.198504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f6458
00:27:19.645 [2024-10-17 19:34:43.199372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.199391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.207493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fb048
00:27:19.645 [2024-10-17 19:34:43.208297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.208318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.217060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e27f0
00:27:19.645 [2024-10-17 19:34:43.217643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.217663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.227897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ebfd0
00:27:19.645 [2024-10-17 19:34:43.229375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.229395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.234224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ee5c8
00:27:19.645 [2024-10-17 19:34:43.234934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.234953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.243238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e27f0
00:27:19.645 [2024-10-17 19:34:43.244022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.244041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.252392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1868
00:27:19.645 [2024-10-17 19:34:43.253191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.645 [2024-10-17 19:34:43.253211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:19.645 [2024-10-17 19:34:43.260956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ed920
00:27:19.645 [2024-10-17 19:34:43.261634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.261654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.271824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f46d0
00:27:19.646 [2024-10-17 19:34:43.272957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.272976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.281242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eb760
00:27:19.646 [2024-10-17 19:34:43.282510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.282528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.289445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e49b0
00:27:19.646 [2024-10-17 19:34:43.290689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.290708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.297203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f92c0
00:27:19.646 [2024-10-17 19:34:43.297897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.297915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.306711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eb328
00:27:19.646 [2024-10-17 19:34:43.307513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.307532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.316156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eaab8
00:27:19.646 [2024-10-17 19:34:43.317070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.317088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.325607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd640
00:27:19.646 [2024-10-17 19:34:43.326628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.326650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.334668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e3060
00:27:19.646 [2024-10-17 19:34:43.335688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.335707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.343935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eff18
00:27:19.646 [2024-10-17 19:34:43.344876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.344895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.353335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de8a8
00:27:19.646 [2024-10-17 19:34:43.354484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.354504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.360720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1b48
00:27:19.646 [2024-10-17 19:34:43.361300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.361319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.369998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0ea0
00:27:19.646 [2024-10-17 19:34:43.370802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.370822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.378354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0a68
00:27:19.646 [2024-10-17 19:34:43.379026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.379045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.388422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ebfd0
00:27:19.646 [2024-10-17 19:34:43.389438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.389457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.397379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e5658
00:27:19.646 [2024-10-17 19:34:43.398338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.398356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.407142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd640
00:27:19.646 [2024-10-17 19:34:43.408274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.408292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.416291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1ca0
00:27:19.646 [2024-10-17 19:34:43.417440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.417459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:19.646 [2024-10-17 19:34:43.425216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f6cc8
00:27:19.646 [2024-10-17 19:34:43.426367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.646 [2024-10-17 19:34:43.426387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.434618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fb048
00:27:19.906 [2024-10-17 19:34:43.435745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.435764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.442457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe2e8
00:27:19.906 [2024-10-17 19:34:43.443044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.443063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.450755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e88f8
00:27:19.906 [2024-10-17 19:34:43.451421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.451440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.459894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eaab8
00:27:19.906 [2024-10-17 19:34:43.460549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.460568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.469510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fb480
00:27:19.906 [2024-10-17 19:34:43.470174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.470193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.478445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1f80
00:27:19.906 [2024-10-17 19:34:43.479218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.479237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.487869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f6cc8
00:27:19.906 [2024-10-17 19:34:43.488767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:19308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.488786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.497964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0ea0
00:27:19.906 [2024-10-17 19:34:43.498894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.498914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.507548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0ff8
00:27:19.906 [2024-10-17 19:34:43.508797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.508817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.517173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166dece0
00:27:19.906 [2024-10-17 19:34:43.518533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.518551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.526592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fe720
00:27:19.906 [2024-10-17 19:34:43.528074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.906 [2024-10-17 19:34:43.528093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:19.906 [2024-10-17 19:34:43.533047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fbcf0
00:27:19.907 [2024-10-17 19:34:43.533880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.533900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.544083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd640
00:27:19.907 [2024-10-17 19:34:43.545153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.545173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.551884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0630
00:27:19.907 [2024-10-17 19:34:43.552350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.552370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.561311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fdeb0
00:27:19.907 [2024-10-17 19:34:43.561916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.561939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.570641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eaab8
00:27:19.907 [2024-10-17 19:34:43.571502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.571522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.578979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1ca0
00:27:19.907 [2024-10-17 19:34:43.579800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.579819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.589803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1ca0
00:27:19.907 [2024-10-17 19:34:43.591211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.591231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.599533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e84c0
00:27:19.907 [2024-10-17 19:34:43.601024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.601043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.606021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ff3c8
00:27:19.907 [2024-10-17 19:34:43.606621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.606641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.616611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166dece0
00:27:19.907 [2024-10-17 19:34:43.617745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.617765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.625656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0350
00:27:19.907 [2024-10-17 19:34:43.626470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.626492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.634528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0350
00:27:19.907 [2024-10-17 19:34:43.635449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.635469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.642925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7538
00:27:19.907 [2024-10-17 19:34:43.643812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.643831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.651950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e5220
00:27:19.907 [2024-10-17 19:34:43.652552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.652572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.660856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:19.907 [2024-10-17 19:34:43.661430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.661450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.669745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:19.907 [2024-10-17 19:34:43.670424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.670443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.678691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:19.907 [2024-10-17 19:34:43.679354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.679373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:19.907 [2024-10-17 19:34:43.687723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:19.907 [2024-10-17 19:34:43.688388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:19.907 [2024-10-17 19:34:43.688407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.698175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6738
00:27:20.168 [2024-10-17 19:34:43.699243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.699262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.706553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0a68
00:27:20.168 [2024-10-17 19:34:43.707471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.707490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.715389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de470
00:27:20.168 [2024-10-17 19:34:43.716051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.716070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.725053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0350
00:27:20.168 [2024-10-17 19:34:43.725889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.725909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.734333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e4de8
00:27:20.168 [2024-10-17 19:34:43.735232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.735251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.745150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e8088
00:27:20.168 [2024-10-17 19:34:43.746506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.746526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.752322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ed4e8
00:27:20.168 [2024-10-17 19:34:43.753217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.753236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.763391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f8618
00:27:20.168 [2024-10-17 19:34:43.764703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.764722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.772356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7970
00:27:20.168 [2024-10-17 19:34:43.773656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.773674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.778807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:20.168 [2024-10-17 19:34:43.779393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.779412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.788376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:20.168 [2024-10-17 19:34:43.789061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.789080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.797349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:20.168 [2024-10-17 19:34:43.798077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.798100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.806328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:20.168 [2024-10-17 19:34:43.807050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.807069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.815369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:20.168 [2024-10-17 19:34:43.816052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.816071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.824358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4298
00:27:20.168 [2024-10-17 19:34:43.824963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.824982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.833323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f20d8
00:27:20.168 [2024-10-17 19:34:43.833909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.833928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.842314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f20d8
00:27:20.168 [2024-10-17 19:34:43.842984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.168 [2024-10-17 19:34:43.843004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:20.168 [2024-10-17 19:34:43.851285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f20d8
00:27:20.169 [2024-10-17 19:34:43.851968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.169 [2024-10-17 19:34:43.851988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:20.169 [2024-10-17 19:34:43.860268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f20d8
00:27:20.169 [2024-10-17 19:34:43.860968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:20.169 [2024-10-17 19:34:43.860987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:20.169 [2024-10-17 19:34:43.870427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x11a9240) with pdu=0x2000166f20d8 00:27:20.169 [2024-10-17 19:34:43.871578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.871597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.879030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e1f80 00:27:20.169 [2024-10-17 19:34:43.879881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.879901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.888097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166df988 00:27:20.169 [2024-10-17 19:34:43.888929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.888948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.897535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f4f40 00:27:20.169 [2024-10-17 19:34:43.898617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.898636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.908734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fda78 00:27:20.169 [2024-10-17 19:34:43.910282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.910301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.915101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e38d0 00:27:20.169 [2024-10-17 19:34:43.915792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.915811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.923647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f1ca0 00:27:20.169 [2024-10-17 19:34:43.924352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.924370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.932812] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ef270 00:27:20.169 [2024-10-17 19:34:43.933421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.933440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.943742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e27f0 00:27:20.169 [2024-10-17 19:34:43.944933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.169 [2024-10-17 19:34:43.944952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:20.169 [2024-10-17 19:34:43.952216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ff3c8 00:27:20.429 [2024-10-17 19:34:43.953118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:43.953138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:43.961267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f2510 00:27:20.429 [2024-10-17 19:34:43.962004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:43.962024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:43.970315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0630 00:27:20.429 [2024-10-17 19:34:43.971150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:43.971169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:43.979804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e6b70 00:27:20.429 [2024-10-17 19:34:43.980426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:43.980447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:43.989020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e9e10 00:27:20.429 [2024-10-17 19:34:43.989968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:43.989986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:43.998017] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166efae0 00:27:20.429 [2024-10-17 19:34:43.998975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:43.998994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.007056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7970 00:27:20.429 [2024-10-17 19:34:44.008043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.008062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.016118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f6020 00:27:20.429 [2024-10-17 19:34:44.017055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.017074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.025138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f0ff8 00:27:20.429 [2024-10-17 19:34:44.026085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.026104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.034175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fc128 00:27:20.429 [2024-10-17 19:34:44.035106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.035129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.043202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166de8a8 00:27:20.429 [2024-10-17 19:34:44.044138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.044157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.052203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e38d0 00:27:20.429 [2024-10-17 19:34:44.053142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.053161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 
[2024-10-17 19:34:44.061242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e2c28 00:27:20.429 [2024-10-17 19:34:44.062175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.062194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.070247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fa7d8 00:27:20.429 [2024-10-17 19:34:44.071186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.071205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.079305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166eff18 00:27:20.429 [2024-10-17 19:34:44.080262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.080281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.088327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e12d8 00:27:20.429 [2024-10-17 19:34:44.089194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.089212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.097303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e4140 00:27:20.429 [2024-10-17 19:34:44.098289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.098308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.106372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd208 00:27:20.429 [2024-10-17 19:34:44.107365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.107383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.115470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f7538 00:27:20.429 [2024-10-17 19:34:44.116416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.116435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 
dnr:0 00:27:20.429 [2024-10-17 19:34:44.124545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166fd640 00:27:20.429 [2024-10-17 19:34:44.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.125512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.133585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0a68 00:27:20.429 [2024-10-17 19:34:44.134522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.134541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.142538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ed0b0 00:27:20.429 [2024-10-17 19:34:44.143514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.143532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.151835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166e0ea0 00:27:20.429 [2024-10-17 19:34:44.152581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.152604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.161024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ddc00 00:27:20.429 [2024-10-17 19:34:44.162084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.162102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.170013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166ebfd0 00:27:20.429 [2024-10-17 19:34:44.171052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.429 [2024-10-17 19:34:44.171071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:20.429 [2024-10-17 19:34:44.179022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9240) with pdu=0x2000166f46d0 00:27:20.429 [2024-10-17 19:34:44.180098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:20.430 [2024-10-17 19:34:44.180117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 
cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:20.430 28079.50 IOPS, 109.69 MiB/s 00:27:20.430 Latency(us) 00:27:20.430 [2024-10-17T17:34:44.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.430 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:20.430 nvme0n1 : 2.01 28085.76 109.71 0.00 0.00 4551.91 1778.83 12545.46 00:27:20.430 [2024-10-17T17:34:44.214Z] =================================================================================================================== 00:27:20.430 [2024-10-17T17:34:44.214Z] Total : 28085.76 109.71 0.00 0.00 4551.91 1778.83 12545.46 00:27:20.430 { 00:27:20.430 "results": [ 00:27:20.430 { 00:27:20.430 "job": "nvme0n1", 00:27:20.430 "core_mask": "0x2", 00:27:20.430 "workload": "randwrite", 00:27:20.430 "status": "finished", 00:27:20.430 "queue_depth": 128, 00:27:20.430 "io_size": 4096, 00:27:20.430 "runtime": 2.006426, 00:27:20.430 "iops": 28085.76045166879, 00:27:20.430 "mibps": 109.71000176433121, 00:27:20.430 "io_failed": 0, 00:27:20.430 "io_timeout": 0, 00:27:20.430 "avg_latency_us": 4551.91356426273, 00:27:20.430 "min_latency_us": 1778.8342857142857, 00:27:20.430 "max_latency_us": 12545.462857142857 00:27:20.430 } 00:27:20.430 ], 00:27:20.430 "core_count": 1 00:27:20.430 } 00:27:20.430 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:20.692 | .driver_specific 00:27:20.692 | .nvme_error 00:27:20.692 | .status_code 00:27:20.692 | .command_transient_transport_error' 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2245653 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2245653 ']' 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2245653 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2245653 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2245653' 00:27:20.692 killing process with pid 2245653 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2245653 00:27:20.692 Received shutdown signal, test time was about 2.000000 seconds 00:27:20.692 
00:27:20.692 Latency(us) 00:27:20.692 [2024-10-17T17:34:44.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:20.692 [2024-10-17T17:34:44.476Z] =================================================================================================================== 00:27:20.692 [2024-10-17T17:34:44.476Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:20.692 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2245653 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2246128 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2246128 /var/tmp/bperf.sock 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2246128 ']' 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:21.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.039 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.039 [2024-10-17 19:34:44.669404] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:27:21.039 [2024-10-17 19:34:44.669455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2246128 ] 00:27:21.039 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:21.039 Zero copy mechanism will not be used. 
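For readers following the get_transient_errcount check traced above: the 220 errors it finds are the COMMAND TRANSIENT TRANSPORT ERROR completions recorded across roughly 28085.76 IOPS x 2.006426 s, or about 56,350 writes in the first pass (io_failed stays 0, presumably because --bdev-retry-count -1 lets the bdev layer retry each failed write). A minimal standalone sketch of that check, assuming the same bdevperf RPC socket and bdev name as this run:

#!/usr/bin/env bash
# Sketch of digest.sh's get_transient_errcount: pull the transient
# transport error counter out of the NVMe error stats kept by the
# bdevperf instance (enabled earlier via --nvme-error-stat).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')
# The digest-error test passes only if the injected corruption was observed.
(( errcount > 0 )) && echo "observed $errcount transient transport errors"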
00:27:21.039 [2024-10-17 19:34:44.743812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.039 [2024-10-17 19:34:44.785414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.333 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:21.333 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:21.333 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.333 19:34:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:21.333 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:21.333 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.333 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.333 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.333 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.333 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:21.592 nvme0n1 00:27:21.851 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:21.851 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.851 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:21.851 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.851 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:21.851 19:34:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:21.851 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:21.851 Zero copy mechanism will not be used. 00:27:21.851 Running I/O for 2 seconds... 
00:27:21.851 [2024-10-17 19:34:45.503286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.503546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.503576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.509164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.509416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.509442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.515021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.515279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.515300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.521185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.521443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.521465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.526336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.526621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.526643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.531299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.531547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.531568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.536431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.536495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.536514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.541681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.541928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.541949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.546368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.546617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.546648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.551071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.551317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.551337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.556366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.556629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.556649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.561447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.851 [2024-10-17 19:34:45.561697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.851 [2024-10-17 19:34:45.561718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.851 [2024-10-17 19:34:45.566169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.566415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.566436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.570869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.571127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.571148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.576151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.576411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.576432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.581748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.581994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.582017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.588478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.588731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.588752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.596082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.596330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.596352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.602576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.602669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.602689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.609763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.610090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.610111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.616573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.616925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.616947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.622963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.623267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.623289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:21.852 [2024-10-17 19:34:45.629609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:21.852 [2024-10-17 19:34:45.629898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:21.852 [2024-10-17 19:34:45.629919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.636482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.636819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.636841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.643210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.643510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.643531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.650317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.650563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.650585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.657168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.657450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.657471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.663855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.664116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 
[2024-10-17 19:34:45.664137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.670822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.671123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.671144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.677944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.678190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.678212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.683410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.683630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.683650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.687522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.687742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.687764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.691626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.691847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.691868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.695753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.695972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.695993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.699888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.700111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.700132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.704056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.704276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.704297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.708236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.708455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.708475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.712407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.712630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.712651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.716552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.716777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.716799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.720885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.721111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.721132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.725307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.725530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.725552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.729764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.729988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.730010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.734687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.734905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.734927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.739389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.739615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.113 [2024-10-17 19:34:45.739652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.113 [2024-10-17 19:34:45.743825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.113 [2024-10-17 19:34:45.744046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.744068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.748265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.748489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.748510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.752673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.752905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.752925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.757126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.757344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.757365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.762132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.762360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.762381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.768392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.768705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.768727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.774950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.775170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.775192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.781251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.781516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.781543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.787787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.788034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.788055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.794497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.794798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.794819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.800757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.801045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.801066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.807155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 
[2024-10-17 19:34:45.807436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.807457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.813131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.813451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.813473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.819468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.819711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.819732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.824391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.824617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.824638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.828630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.828859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.828880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.832843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.833069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.833089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.837019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.837239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.837260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.841202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.841418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.841439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.845374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.845593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.845620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.849506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.849729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.849750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.853607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.853828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.853848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.857753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.857973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.857993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.861932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.862153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.862173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.866064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.866284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.866305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.870202] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.870418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.870439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.874326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.874546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.874567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.878476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.878698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.878718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.882546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.114 [2024-10-17 19:34:45.882771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.114 [2024-10-17 19:34:45.882792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.114 [2024-10-17 19:34:45.886691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.115 [2024-10-17 19:34:45.886907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.115 [2024-10-17 19:34:45.886928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.115 [2024-10-17 19:34:45.890751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.115 [2024-10-17 19:34:45.890970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.115 [2024-10-17 19:34:45.890991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.115 [2024-10-17 19:34:45.895049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.115 [2024-10-17 19:34:45.895267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.115 [2024-10-17 19:34:45.895289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
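The trio that repeats above (a data_crc32_calc_done *ERROR*, the WRITE command print, and a completion carrying TRANSIENT TRANSPORT ERROR) is the expected shape of a data-digest failure on NVMe/TCP: each DATA PDU carries a DDGST trailer, the receiver recomputes CRC32C over the PDU's data field, and a mismatch is surfaced as a retryable transport-level status instead of being silently accepted. A minimal, standalone sketch of just the digest math follows; the function name crc32c_sw and the self-test are illustrative only and are not SPDK code (SPDK ships its own spdk_crc32c helpers, often hardware accelerated).

/*
 * Minimal software CRC32C (Castagnoli) sketch, standalone for illustration.
 * NVMe/TCP's DDGST trailer is CRC32C over the PDU DATA field; a receiver
 * that recomputes it and gets a different value reports the kind of
 * "Data digest error" seen in the log above.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c_sw(const void *data, size_t len)
{
	const uint8_t *p = data;
	uint32_t crc = 0xFFFFFFFFu;              /* CRC-32C initial value */

	while (len--) {
		crc ^= *p++;
		for (int k = 0; k < 8; k++) {
			/* 0x82F63B78 is the bit-reflected Castagnoli polynomial */
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;                /* final XOR */
}

int main(void)
{
	const char *check = "123456789";
	uint32_t ddgst = crc32c_sw(check, strlen(check));

	/* The well-known CRC-32C check value for "123456789" is 0xE3069283. */
	printf("crc32c(\"%s\") = 0x%08X (%s)\n", check, ddgst,
	       ddgst == 0xE3069283u ? "ok" : "MISMATCH");
	return 0;
}

Compiled and run, this should print 0xE3069283, the standard CRC-32C check value, which is a quick way to confirm an implementation uses the same polynomial, reflection, and final XOR as the DDGST.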
00:27:22.375 [2024-10-17 19:34:45.899146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.375 [2024-10-17 19:34:45.899365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.375 [2024-10-17 19:34:45.899387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.375 [2024-10-17 19:34:45.903372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.375 [2024-10-17 19:34:45.903587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.375 [2024-10-17 19:34:45.903617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.375 [2024-10-17 19:34:45.907542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.375 [2024-10-17 19:34:45.907763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.375 [2024-10-17 19:34:45.907784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.375 [2024-10-17 19:34:45.911708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.375 [2024-10-17 19:34:45.911925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.375 [2024-10-17 19:34:45.911946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.375 [2024-10-17 19:34:45.915796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.375 [2024-10-17 19:34:45.916016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.375 [2024-10-17 19:34:45.916036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.375 [2024-10-17 19:34:45.919915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.375 [2024-10-17 19:34:45.920132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.920152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.924034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.924253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.924274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.928147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.928365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.928386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.932189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.932405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.932425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.936240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.936460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.936480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.940331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.940564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.940585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.944477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.944700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.944720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.948798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.949015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.949035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.953314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.953532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.953553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.957446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.957670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.957691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.961805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.962023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.962044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.967376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.967679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.967699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.973066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.973285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.973306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.977743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.977980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.978000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.982264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.982482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.982503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.986953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.987171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.987192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.992733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.993014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.993035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:45.998491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:45.998734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:45.998754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.003548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.003772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.003793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.008256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.008505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.008526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.013076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.013298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.013320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.017725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.017950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.017971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.022562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.022790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 
[2024-10-17 19:34:46.022815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.027758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.028026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.028046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.032944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.033161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.033181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.038288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.038506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.038527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.376 [2024-10-17 19:34:46.043228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.376 [2024-10-17 19:34:46.043461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.376 [2024-10-17 19:34:46.043481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.048434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.048657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.048677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.053198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.053415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.053435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.057931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.058150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.058170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.062595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.062819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.062839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.067307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.067575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.067596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.071983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.072228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.072249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.076690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.076910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.076931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.081075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.081292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.081313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.085562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.085783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.085804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.091238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.091560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.091581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.096939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.097179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.097200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.102710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.102930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.102951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.107768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.108004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.108029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.112801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.113019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.113040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.117751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.117966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.117987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.122942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.123161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.123182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.127774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.127994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.128015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.132643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.132865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.132887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.137582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.137810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.137831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.142300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.142518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.142539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.146946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.147165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.147186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.151396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.151627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.151648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.377 [2024-10-17 19:34:46.156304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.377 [2024-10-17 19:34:46.156596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.377 [2024-10-17 19:34:46.156622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.162150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 
[2024-10-17 19:34:46.162434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.162455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.167266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.167487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.167508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.171990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.172210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.172231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.176824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.177046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.177067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.181612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.181831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.181852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.186281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.186521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.186542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.191266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.191486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.191506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.196167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.196387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.196407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.201203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.201422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.201442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.206037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.206258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.206279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.210904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.211131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.211152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.215725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.215946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.215967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.220382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.220598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.220625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.225451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.225675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.225696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.229672] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.229894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.229914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.233792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.638 [2024-10-17 19:34:46.234013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.638 [2024-10-17 19:34:46.234037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.638 [2024-10-17 19:34:46.237905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.238125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.238146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.242057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.242274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.242294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.246136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.246355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.246375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.250255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.250473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.250494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.254385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.254612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.254633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
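For reading the completion prints themselves: "(00/22)" is status code type 0 (generic command status) with status code 0x22, Transient Transport Error, and "p:0 m:0 dnr:0" are the phase, more, and do-not-retry bits, so with DNR clear the host is permitted to retry each failed WRITE. Below is a small sketch of the 16-bit status-field layout behind that print; the struct mirrors the NVMe completion status format but assumes LSB-first bit-field ordering (as on common ABIs) and is purely illustrative, not the SPDK definition.

/*
 * Sketch of the 16-bit Status Field in NVMe completion dword 3, which
 * packs, from bit 0 up: Phase (1), Status Code (8), Status Code Type (3),
 * Command Retry Delay (2), More (1), Do Not Retry (1).
 */
#include <stdio.h>

struct nvme_status {
	unsigned p   : 1;  /* phase tag */
	unsigned sc  : 8;  /* status code */
	unsigned sct : 3;  /* status code type */
	unsigned crd : 2;  /* command retry delay */
	unsigned m   : 1;  /* more */
	unsigned dnr : 1;  /* do not retry */
};

int main(void)
{
	/* SCT 0h / SC 22h = generic "Transient Transport Error", as logged. */
	struct nvme_status st = { .p = 0, .sc = 0x22, .sct = 0x0,
				  .crd = 0, .m = 0, .dnr = 0 };

	printf("(%02x/%02x) p:%u m:%u dnr:%u -> %s\n",
	       (unsigned)st.sct, (unsigned)st.sc, (unsigned)st.p,
	       (unsigned)st.m, (unsigned)st.dnr,
	       (st.sct == 0 && st.sc == 0x22)
		   ? "transient transport error (retryable; dnr clear)"
		   : "other status");
	return 0;
}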
00:27:22.639 [2024-10-17 19:34:46.258589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.258815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.258835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.262853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.263076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.263098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.267376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.267615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.267635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.271658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.271886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.271907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.276109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.276326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.276347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.280722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.280942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.280963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.285734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.285953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.285973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.290683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.290900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.290921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.295720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.295939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.295959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.300916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.301134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.301154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.305933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.306155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.306175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.311116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.311340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.311361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.316183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.316400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.316421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.639 [2024-10-17 19:34:46.321348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:22.639 [2024-10-17 19:34:46.321567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.639 [2024-10-17 19:34:46.321587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.326239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.326458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.326478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.331090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.331308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.331329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.336200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.336421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.336441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.340908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.341129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.341149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.345773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.345993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.346014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.351575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.351853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.351873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.356772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.356989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.357013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.361629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.361850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.361870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.366822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.639 [2024-10-17 19:34:46.367038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.639 [2024-10-17 19:34:46.367059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.639 [2024-10-17 19:34:46.371760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.371979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.372000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.376718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.376936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.376957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.381819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.382038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.382058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.387163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.387384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.387405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.392215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.392437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.392457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.397173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.397395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.397415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.402293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.402514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.402534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.407347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.407568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.407589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.411910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.412131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.412156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.416344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.416563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.416584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.640 [2024-10-17 19:34:46.420671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.640 [2024-10-17 19:34:46.420907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.640 [2024-10-17 19:34:46.420928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.425223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.425442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.425463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.430009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.430232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.430253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.434410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.434634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.434655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.438737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.438957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.438977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.443061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.443276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.443297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.447505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.447730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.447751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.451890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.452117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.452138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.457239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.457512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.900 [2024-10-17 19:34:46.457533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.900 [2024-10-17 19:34:46.463607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.900 [2024-10-17 19:34:46.463881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.463902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.469654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.469910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.469931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.475816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.476068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.476089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.482498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.482721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.482743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.487562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.487793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.487817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 6186.00 IOPS, 773.25 MiB/s [2024-10-17T17:34:46.685Z] [2024-10-17 19:34:46.493496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.493681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.493703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.498323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.498505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.498526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.502679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.502866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.502887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.507084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.507359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.507378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.512275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.512573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.512594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.517845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.518149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.518170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.523504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.523804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.523826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.529083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.529318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.529340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.534895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.535210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.535231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.540006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.540287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.540307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.545122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.545375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.545396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.550613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.550832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.550852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.555876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.556173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.556194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.561004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.561255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.561275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.566154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.566457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.566478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.571170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.571407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.571428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.576315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.576628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.576653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.581696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.581930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.581950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.586983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.587272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.587293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.592434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.592647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.592666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.597932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.598095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.598115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.603152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.603296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.603314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.608695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.608882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.608900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.613806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.901 [2024-10-17 19:34:46.613958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.901 [2024-10-17 19:34:46.613976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.901 [2024-10-17 19:34:46.618876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.619048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.619068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.624217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.624385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.624405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.629449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.629597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.629624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.634608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.634787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.634810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.640132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.640273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.640293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.645579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.645769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.645790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.650659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.650852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.650872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.655791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.655937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.655956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.661199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.661367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.661385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.666402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.666575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.666595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.671513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.671673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.671691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.676780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.676947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.676967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:22.902 [2024-10-17 19:34:46.682113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:22.902 [2024-10-17 19:34:46.682276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:22.902 [2024-10-17 19:34:46.682297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.687865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.688001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.688021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.694723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.694896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.694917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.699750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.699803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.699822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.703696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.703760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.703779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.707544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.707616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.707635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.711904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.711973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.711995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.716660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.716742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.716762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.721114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.721170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.721189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.724969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.725030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.725050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.728807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.728866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.728885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.732645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.732699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.732717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.736607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.736664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.736682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.741023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.741077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.741095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.745447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.745502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.745521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.749358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.749426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.749444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.753347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.753401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.753420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.757195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.757261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.757280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.761187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.761242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.761260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.765121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.765192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.765210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.769031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.769084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.769103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.772968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.163 [2024-10-17 19:34:46.773039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.163 [2024-10-17 19:34:46.773058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.163 [2024-10-17 19:34:46.776921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.776977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.776996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.780841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.780909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.780928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.785314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.785390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.785408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.790639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.790799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.790819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.796446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.796567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.796587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.803447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.803583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.803610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.809773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.809967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.809987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.816061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.816186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.816207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.822707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.822900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.822921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.828869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.828972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.828995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.834790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.834900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.834923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.838936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.838993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.839012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.842809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.842863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.842882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.846731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.846796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.846815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.851223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.851306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.851326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.855138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.855243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.855263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.859106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.859165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.859183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.863065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.863138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.863157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.867233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.867345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.867365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.871142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.871223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.871241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.874957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.875039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.875060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.878715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.878771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.878790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.882837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.882946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.882966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.886903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.886956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.886974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.890888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.890980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.891000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.895703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.895780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.895798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.899891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.899945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.899963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.903768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.164 [2024-10-17 19:34:46.903829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.164 [2024-10-17 19:34:46.903847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.164 [2024-10-17 19:34:46.907563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.907632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.907650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.911395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.911466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.911485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.915374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.915429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.915448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.920257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.920312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.920330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.924309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.924368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.924385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.928190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.928243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.928263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.932004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.932066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.932086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.935895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.935947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.935966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.939727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.939794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.939816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.165 [2024-10-17 19:34:46.943536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.165 [2024-10-17 19:34:46.943591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.165 [2024-10-17 19:34:46.943616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.425 [2024-10-17 19:34:46.947452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.425 [2024-10-17 19:34:46.947521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.425 [2024-10-17 19:34:46.947540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.425 [2024-10-17 19:34:46.951264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.425 [2024-10-17 19:34:46.951316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.425 [2024-10-17 19:34:46.951334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.425 [2024-10-17 19:34:46.955067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.425 [2024-10-17 19:34:46.955131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.425 [2024-10-17 19:34:46.955149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.425 [2024-10-17 19:34:46.958870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.425 [2024-10-17 19:34:46.958938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.425 [2024-10-17 19:34:46.958957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.425 [2024-10-17 19:34:46.962656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.425 [2024-10-17 19:34:46.962708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.425 [2024-10-17 19:34:46.962726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.425 [2024-10-17 19:34:46.966502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.426 [2024-10-17 19:34:46.966552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.426 [2024-10-17 19:34:46.966569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.426 [2024-10-17 19:34:46.970322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.426 [2024-10-17 19:34:46.970386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.426 [2024-10-17 19:34:46.970404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.426 [2024-10-17 19:34:46.974124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.426 [2024-10-17 19:34:46.974249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.426 [2024-10-17 19:34:46.974270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.426 [2024-10-17 19:34:46.978018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.426 [2024-10-17 19:34:46.978086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.426 [2024-10-17 19:34:46.978105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.426 [2024-10-17 19:34:46.981880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on
tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:46.981937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:46.981956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:46.985666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:46.985729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:46.985747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:46.989576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:46.989647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:46.989665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:46.993393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:46.993521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:46.993542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:46.997431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:46.997489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:46.997508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.001230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.001326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.001346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.005053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.005164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.005184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.008895] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.008978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.008997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.012762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.012815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.012834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.016714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.016776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.016795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.020531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.020592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.020618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.024367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.024446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.024465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.028305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.028370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.028390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.032098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90 00:27:23.426 [2024-10-17 19:34:47.032199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.426 [2024-10-17 19:34:47.032219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:23.426 [2024-10-17 19:34:47.035885] 
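The repeating triplet above is SPDK's NVMe/TCP host catching injected data-digest failures: when DDGST is negotiated, each data PDU carries a CRC32C computed over its payload, the receive path recomputes it in data_crc32_calc_done(), and a mismatch fails the command back as a transport error instead of surfacing corrupt data. Below is a minimal, self-contained sketch of that digest check; the bitwise crc32c() helper is illustrative only and is not SPDK's implementation.

/* Illustration of the DDGST check this test exercises: compute CRC32C
 * over a payload on the send side, corrupt the payload in transit, and
 * detect the mismatch on the receive side. Standalone sketch, not SPDK. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli), reflected, polynomial 0x82F63B48. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B48u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    uint8_t payload[512];                 /* sample payload, size arbitrary */
    memset(payload, 0xA5, sizeof(payload));

    uint32_t sent_ddgst = crc32c(payload, sizeof(payload));

    payload[7] ^= 0x01;                   /* simulate the injected corruption */
    uint32_t recv_ddgst = crc32c(payload, sizeof(payload));

    if (recv_ddgst != sent_ddgst)
        printf("Data digest error: sent=0x%08x recv=0x%08x\n",
               (unsigned)sent_ddgst, (unsigned)recv_ddgst);
    return 0;
}

Because the corruption is detected at the transport layer, the command itself never reaches the namespace; that is why every one of these WRITEs completes with a transport-class status rather than a media error.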
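Each completion notice prints its NVMe status as (SCT/SC) plus the phase, more, and do-not-retry bits: (00/22) is Status Code Type 0h (generic) with Status Code 22h, Command Transient Transport Error, and dnr:0 leaves the host free to retry. A hedged sketch of how the 15-bit completion Status Field splits into those pieces, following the NVMe base specification's completion queue entry layout, is below; the struct and decode_sf() helper are hypothetical names, not an SPDK API.

/* Decode an NVMe completion Status Field the way the "(00/22) ... p:0
 * m:0 dnr:0" notices above are printed. Illustrative sketch only. */
#include <stdint.h>
#include <stdio.h>

struct status { uint8_t sc, sct, crd; int more, dnr; };

/* 'sf' is the 15-bit Status Field (completion entry DW3 bits 31:17). */
static struct status decode_sf(uint16_t sf)
{
    struct status s;
    s.sc   =  sf        & 0xFF;   /* Status Code               */
    s.sct  = (sf >> 8)  & 0x7;    /* Status Code Type          */
    s.crd  = (sf >> 11) & 0x3;    /* Command Retry Delay index */
    s.more = (sf >> 13) & 0x1;    /* More info in log page     */
    s.dnr  = (sf >> 14) & 0x1;    /* Do Not Retry              */
    return s;
}

int main(void)
{
    /* SCT 0h (generic), SC 22h: Command Transient Transport Error,
     * matching the completions logged above. */
    struct status s = decode_sf(0x0022);
    printf("(%02x/%02x) m:%d dnr:%d\n", s.sct, s.sc, s.more, s.dnr);
    return 0;
}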
00:27:23.948 [2024-10-17 19:34:47.472892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.948 [2024-10-17 19:34:47.472984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.948 [2024-10-17 19:34:47.473005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:23.948 [2024-10-17 19:34:47.479153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.948 [2024-10-17 19:34:47.479346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.948 [2024-10-17 19:34:47.479366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:23.948 [2024-10-17 19:34:47.485461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.948 [2024-10-17 19:34:47.485581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.948 [2024-10-17 19:34:47.485606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:23.948 [2024-10-17 19:34:47.491544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x11a9580) with pdu=0x2000166fef90
00:27:23.688 [2024-10-17 19:34:47.491689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:23.948 [2024-10-17 19:34:47.491709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:23.948 6559.50 IOPS, 819.94 MiB/s
00:27:23.948 Latency(us)
00:27:23.948 [2024-10-17T17:34:47.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:23.948 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:23.948 nvme0n1 : 2.00 6555.84 819.48 0.00 0.00 2436.26 1771.03 12108.56
00:27:23.948 [2024-10-17T17:34:47.732Z] ===================================================================================================================
00:27:23.948 [2024-10-17T17:34:47.732Z] Total : 6555.84 819.48 0.00 0.00 2436.26 1771.03 12108.56
00:27:23.948 {
00:27:23.948 "results": [
00:27:23.948 {
00:27:23.948 "job": "nvme0n1",
00:27:23.948 "core_mask": "0x2",
00:27:23.948 "workload": "randwrite",
00:27:23.948 "status": "finished",
00:27:23.948 "queue_depth": 16,
00:27:23.948 "io_size": 131072,
00:27:23.948 "runtime": 2.003556,
00:27:23.948 "iops": 6555.843709883827,
00:27:23.948 "mibps": 819.4804637354783,
00:27:23.948 "io_failed": 0,
00:27:23.948 "io_timeout": 0,
00:27:23.948 "avg_latency_us": 2436.259251219026,
00:27:23.948 "min_latency_us": 1771.032380952381,
00:27:23.948 "max_latency_us": 12108.55619047619
00:27:23.948 }
00:27:23.948 ],
00:27:23.948 "core_count": 1
00:27:23.948 }
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:27:23.948 | .driver_specific
00:27:23.948 | .nvme_error
00:27:23.948 | .status_code
00:27:23.948 | .command_transient_transport_error'
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 423 > 0 ))
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2246128
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2246128 ']'
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2246128
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:23.948 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2246128
00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:24.208 19:34:47
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2246128' 00:27:24.208 killing process with pid 2246128 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2246128 00:27:24.208 Received shutdown signal, test time was about 2.000000 seconds 00:27:24.208 00:27:24.208 Latency(us) 00:27:24.208 [2024-10-17T17:34:47.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.208 [2024-10-17T17:34:47.992Z] =================================================================================================================== 00:27:24.208 [2024-10-17T17:34:47.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2246128 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2244263 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2244263 ']' 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2244263 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2244263 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2244263' 00:27:24.208 killing process with pid 2244263 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2244263 00:27:24.208 19:34:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2244263 00:27:24.467 00:27:24.467 real 0m14.589s 00:27:24.467 user 0m27.330s 00:27:24.467 sys 0m4.716s 00:27:24.467 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:24.467 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:24.467 ************************************ 00:27:24.467 END TEST nvmf_digest_error 00:27:24.467 ************************************ 00:27:24.467 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:24.467 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:24.467 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:24.467 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 
-- # modprobe -v -r nvme-tcp 00:27:24.468 rmmod nvme_tcp 00:27:24.468 rmmod nvme_fabrics 00:27:24.468 rmmod nvme_keyring 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@515 -- # '[' -n 2244263 ']' 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # killprocess 2244263 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2244263 ']' 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2244263 00:27:24.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2244263) - No such process 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2244263 is not found' 00:27:24.468 Process with pid 2244263 is not found 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-save 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@789 -- # iptables-restore 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:24.468 19:34:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.004 19:34:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:27.004 00:27:27.004 real 0m36.771s 00:27:27.004 user 0m55.419s 00:27:27.004 sys 0m13.854s 00:27:27.004 19:34:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:27.004 19:34:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:27.004 ************************************ 00:27:27.005 END TEST nvmf_digest 00:27:27.005 ************************************ 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.005 ************************************ 00:27:27.005 START TEST nvmf_bdevperf 00:27:27.005 ************************************ 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:27.005 * Looking for test storage... 00:27:27.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:27.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.005 --rc genhtml_branch_coverage=1 00:27:27.005 --rc genhtml_function_coverage=1 00:27:27.005 --rc genhtml_legend=1 00:27:27.005 --rc geninfo_all_blocks=1 00:27:27.005 --rc geninfo_unexecuted_blocks=1 00:27:27.005 00:27:27.005 ' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:27.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.005 --rc genhtml_branch_coverage=1 00:27:27.005 --rc genhtml_function_coverage=1 00:27:27.005 --rc genhtml_legend=1 00:27:27.005 --rc geninfo_all_blocks=1 00:27:27.005 --rc geninfo_unexecuted_blocks=1 00:27:27.005 00:27:27.005 ' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:27.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.005 --rc genhtml_branch_coverage=1 00:27:27.005 --rc genhtml_function_coverage=1 00:27:27.005 --rc genhtml_legend=1 00:27:27.005 --rc geninfo_all_blocks=1 00:27:27.005 --rc geninfo_unexecuted_blocks=1 00:27:27.005 00:27:27.005 ' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:27.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:27.005 --rc genhtml_branch_coverage=1 00:27:27.005 --rc genhtml_function_coverage=1 00:27:27.005 --rc genhtml_legend=1 00:27:27.005 --rc geninfo_all_blocks=1 00:27:27.005 --rc geninfo_unexecuted_blocks=1 00:27:27.005 00:27:27.005 ' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:27.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:27.005 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:27:27.006 19:34:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:33.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:33.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
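The trace above is nvmf/common.sh resolving each discovered E810 function (vendor 0x8086, device 0x159b, per the "Found 0000:86:00.x" lines) to its kernel net device through the /sys/bus/pci/devices/$pci/net/ glob; the per-device loop continues below and reports cvl_0_0 and cvl_0_1. A minimal standalone sketch of the same sysfs lookup, for illustration only (not part of the test suite):

    # Map Intel E810 (8086:159b) PCI functions to their net devices via sysfs.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "${pci##*/} -> ${net##*/}"   # e.g. 0000:86:00.0 -> cvl_0_0
        done
    done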
00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:33.578 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:33.579 Found net devices under 0000:86:00.0: cvl_0_0 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:33.579 Found net devices under 0000:86:00.1: cvl_0_1 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:33.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.482 ms 00:27:33.579 00:27:33.579 --- 10.0.0.2 ping statistics --- 00:27:33.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.579 rtt min/avg/max/mdev = 0.482/0.482/0.482/0.000 ms 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:33.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms
00:27:33.579
00:27:33.579 --- 10.0.0.1 ping statistics ---
00:27:33.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:33.579 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2250142
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2250142
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2250142 ']'
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:33.579 [2024-10-17 19:34:56.570064] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
00:27:33.579 [2024-10-17 19:34:56.570112] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:33.579 [2024-10-17 19:34:56.648887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:33.579 [2024-10-17 19:34:56.690979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:33.579 [2024-10-17 19:34:56.691017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:33.579 [2024-10-17 19:34:56.691024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:33.579 [2024-10-17 19:34:56.691031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:33.579 [2024-10-17 19:34:56.691036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:33.579 [2024-10-17 19:34:56.692389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.579 [2024-10-17 19:34:56.692499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.579 [2024-10-17 19:34:56.692500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.579 [2024-10-17 19:34:56.828520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:33.579 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.580 Malloc0 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
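The rpc_cmd calls traced here stand up the NVMe-oF target that bdevperf will attach to: the TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 have been created above, and the namespace and listener steps follow below. A hedged standalone equivalent of the full sequence, assuming nvmf_tgt is reachable on the default /var/tmp/spdk.sock (rpc_cmd in the harness wraps the same rpc.py client):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420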
00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:33.580 [2024-10-17 19:34:56.893346] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:33.580 { 00:27:33.580 "params": { 00:27:33.580 "name": "Nvme$subsystem", 00:27:33.580 "trtype": "$TEST_TRANSPORT", 00:27:33.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.580 "adrfam": "ipv4", 00:27:33.580 "trsvcid": "$NVMF_PORT", 00:27:33.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.580 "hdgst": ${hdgst:-false}, 00:27:33.580 "ddgst": ${ddgst:-false} 00:27:33.580 }, 00:27:33.580 "method": "bdev_nvme_attach_controller" 00:27:33.580 } 00:27:33.580 EOF 00:27:33.580 )") 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:33.580 19:34:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:33.580 "params": { 00:27:33.580 "name": "Nvme1", 00:27:33.580 "trtype": "tcp", 00:27:33.580 "traddr": "10.0.0.2", 00:27:33.580 "adrfam": "ipv4", 00:27:33.580 "trsvcid": "4420", 00:27:33.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.580 "hdgst": false, 00:27:33.580 "ddgst": false 00:27:33.580 }, 00:27:33.580 "method": "bdev_nvme_attach_controller" 00:27:33.580 }' 00:27:33.580 [2024-10-17 19:34:56.946151] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:27:33.580 [2024-10-17 19:34:56.946194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250336 ] 00:27:33.580 [2024-10-17 19:34:57.020887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.580 [2024-10-17 19:34:57.062665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.580 Running I/O for 1 seconds... 00:27:34.955 11366.00 IOPS, 44.40 MiB/s 00:27:34.955 Latency(us) 00:27:34.955 [2024-10-17T17:34:58.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:34.955 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:34.955 Verification LBA range: start 0x0 length 0x4000 00:27:34.955 Nvme1n1 : 1.01 11417.07 44.60 0.00 0.00 11168.56 947.93 10610.59 00:27:34.955 [2024-10-17T17:34:58.739Z] =================================================================================================================== 00:27:34.955 [2024-10-17T17:34:58.739Z] Total : 11417.07 44.60 0.00 0.00 11168.56 947.93 10610.59 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2250615 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:27:34.955 { 00:27:34.955 "params": { 00:27:34.955 "name": "Nvme$subsystem", 00:27:34.955 "trtype": "$TEST_TRANSPORT", 00:27:34.955 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.955 "adrfam": "ipv4", 00:27:34.955 "trsvcid": "$NVMF_PORT", 00:27:34.955 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.955 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.955 "hdgst": ${hdgst:-false}, 00:27:34.955 "ddgst": ${ddgst:-false} 00:27:34.955 }, 00:27:34.955 "method": "bdev_nvme_attach_controller" 00:27:34.955 } 00:27:34.955 EOF 00:27:34.955 )") 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 
00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:27:34.955 19:34:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:27:34.955 "params": { 00:27:34.955 "name": "Nvme1", 00:27:34.955 "trtype": "tcp", 00:27:34.955 "traddr": "10.0.0.2", 00:27:34.955 "adrfam": "ipv4", 00:27:34.955 "trsvcid": "4420", 00:27:34.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:34.955 "hdgst": false, 00:27:34.955 "ddgst": false 00:27:34.955 }, 00:27:34.955 "method": "bdev_nvme_attach_controller" 00:27:34.955 }' 00:27:34.955 [2024-10-17 19:34:58.559061] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:27:34.955 [2024-10-17 19:34:58.559110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2250615 ] 00:27:34.955 [2024-10-17 19:34:58.636114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.955 [2024-10-17 19:34:58.673769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.214 Running I/O for 15 seconds... 00:27:37.531 11549.00 IOPS, 45.11 MiB/s [2024-10-17T17:35:01.576Z] 11510.00 IOPS, 44.96 MiB/s [2024-10-17T17:35:01.576Z] 19:35:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2250142 00:27:37.792 19:35:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:37.792 [2024-10-17 19:35:01.527304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:37.792 [2024-10-17 19:35:01.527345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.792 [2024-10-17 19:35:01.527363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.792 [2024-10-17 19:35:01.527372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.792 [2024-10-17 19:35:01.527382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.792 [2024-10-17 19:35:01.527391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.792 [2024-10-17 19:35:01.527399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.792 [2024-10-17 19:35:01.527411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.792 [2024-10-17 19:35:01.527419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.792 [2024-10-17 19:35:01.527426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.792 [2024-10-17 19:35:01.527435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:37.792 [2024-10-17 
00:27:37.792 [2024-10-17 19:35:01.527304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:37.792 [2024-10-17 19:35:01.527345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.792 [2024-10-17 19:35:01.527363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:37.792 [2024-10-17 19:35:01.527372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs elided (19:35:01.527382 through 19:35:01.529413): every remaining I/O still queued on the deleted submission queue, WRITE commands covering lba 103952-104840 and READ commands covering lba 103832-103928 (len:8 each), is printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) while qpair 0x1302c60 is torn down ...]
00:27:37.795 [2024-10-17 19:35:01.529420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1302c60 is same with the state(6) to be set
00:27:37.795 [2024-10-17 19:35:01.529429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:37.795 [2024-10-17 19:35:01.529434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:37.795 [2024-10-17 19:35:01.529441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103936 len:8 PRP1 0x0 PRP2 0x0
00:27:37.795 [2024-10-17 19:35:01.529448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:37.795 [2024-10-17 19:35:01.529492] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1302c60 was disconnected and freed. reset controller.
00:27:37.795 [2024-10-17 19:35:01.532232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:37.795 [2024-10-17 19:35:01.532286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:37.795 [2024-10-17 19:35:01.533060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:37.795 [2024-10-17 19:35:01.533078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:37.795 [2024-10-17 19:35:01.533086] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:37.795 [2024-10-17 19:35:01.533260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:37.795 [2024-10-17 19:35:01.533434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:37.795 [2024-10-17 19:35:01.533443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:37.795 [2024-10-17 19:35:01.533452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:37.795 [2024-10-17 19:35:01.536201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
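What happened above is the host-side teardown after the target died: the TCP connection dropped, nvme_qpair_abort_queued_reqs drained the submission queue, and every I/O still in flight was manually completed with ABORTED - SQ DELETION (00/08) before bdev_nvme freed qpair 0x1302c60 and scheduled a controller reset. When triaging a log like this, the dump is usually easier to summarize than to read line by line; a small illustrative sketch (the log file name here is a placeholder):

LOG=autotest.log   # placeholder name for a saved copy of this console log

# How many queued commands were completed as aborted during teardown?
grep -c 'ABORTED - SQ DELETION' "$LOG"

# What LBA range was still in flight when the submission queue was deleted?
grep -oE 'lba:[0-9]+' "$LOG" | cut -d: -f2 | sort -n | sed -n '1p;$p'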
00:27:37.795 [2024-10-17 19:35:01.545406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.795 [2024-10-17 19:35:01.545783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.795 [2024-10-17 19:35:01.545802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:37.795 [2024-10-17 19:35:01.545810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:37.795 [2024-10-17 19:35:01.545980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:37.795 [2024-10-17 19:35:01.546149] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.795 [2024-10-17 19:35:01.546159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.795 [2024-10-17 19:35:01.546166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.795 [2024-10-17 19:35:01.548745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.795 [2024-10-17 19:35:01.558150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.795 [2024-10-17 19:35:01.558575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.795 [2024-10-17 19:35:01.558592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:37.795 [2024-10-17 19:35:01.558607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:37.795 [2024-10-17 19:35:01.558778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:37.795 [2024-10-17 19:35:01.558948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.795 [2024-10-17 19:35:01.558958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.795 [2024-10-17 19:35:01.558964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.795 [2024-10-17 19:35:01.561635] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:37.795 [2024-10-17 19:35:01.570975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:37.795 [2024-10-17 19:35:01.571312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:37.795 [2024-10-17 19:35:01.571328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:37.795 [2024-10-17 19:35:01.571336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:37.795 [2024-10-17 19:35:01.571503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:37.795 [2024-10-17 19:35:01.571680] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:37.795 [2024-10-17 19:35:01.571690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:37.795 [2024-10-17 19:35:01.571697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:37.795 [2024-10-17 19:35:01.574363] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.056 [2024-10-17 19:35:01.583902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.584338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.584382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.584406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.584998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.585389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.585407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.585421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.591637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.056 [2024-10-17 19:35:01.598807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.599339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.599396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.599420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.600016] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.600526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.600540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.600550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.604609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.056 [2024-10-17 19:35:01.611814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.612072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.612088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.612095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.612263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.612436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.612446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.612452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.615124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.056 [2024-10-17 19:35:01.624779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.625230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.625274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.625298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.625892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.626425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.626434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.626440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.628956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.056 [2024-10-17 19:35:01.637569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.637989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.638005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.638013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.638172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.638331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.638340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.638346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.640874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.056 [2024-10-17 19:35:01.650393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.650799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.650816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.650823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.650982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.651141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.651151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.651157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.653783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:38.056 [2024-10-17 19:35:01.663170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:38.056 [2024-10-17 19:35:01.663584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:38.056 [2024-10-17 19:35:01.663605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:38.056 [2024-10-17 19:35:01.663613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:38.056 [2024-10-17 19:35:01.663773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:38.056 [2024-10-17 19:35:01.663933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:38.056 [2024-10-17 19:35:01.663943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:38.056 [2024-10-17 19:35:01.663949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:38.056 [2024-10-17 19:35:01.666470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:38.056 [2024-10-17 19:35:01.675992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.056 [2024-10-17 19:35:01.676409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.056 [2024-10-17 19:35:01.676426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.056 [2024-10-17 19:35:01.676432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.056 [2024-10-17 19:35:01.676593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.056 [2024-10-17 19:35:01.676759] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.056 [2024-10-17 19:35:01.676769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.056 [2024-10-17 19:35:01.676775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.056 [2024-10-17 19:35:01.679296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.056 [2024-10-17 19:35:01.688802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.056 [2024-10-17 19:35:01.689230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.056 [2024-10-17 19:35:01.689274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.689298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.689893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.690479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.690510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.690517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.693043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.701524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.701866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.701882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.701892] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.702051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.702211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.702220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.702226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.704756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.714264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.714683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.714728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.714752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.715332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.715531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.715540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.715547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.718073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.726995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.727319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.727335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.727342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.727501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.727665] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.727674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.727680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.730198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.739800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.740131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.740147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.740154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.740312] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.740474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.740483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.740489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.743012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.752536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.752915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.752931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.752939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.753098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.753258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.753267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.753273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.755799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.765334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.765773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.765818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.765841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.766421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.766666] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.766677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.766683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.769201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.778124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.778399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.778415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.778422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.778581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.778746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.778756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.778762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.781393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.791091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.791515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.791532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.791540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.791728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.791916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.791926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.791933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.794641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.804081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.804442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.804458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.804467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.804642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.804811] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.804820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.804827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.807557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.057 [2024-10-17 19:35:01.817044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.057 [2024-10-17 19:35:01.817397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.057 [2024-10-17 19:35:01.817440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.057 [2024-10-17 19:35:01.817464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.057 [2024-10-17 19:35:01.818041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.057 [2024-10-17 19:35:01.818432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.057 [2024-10-17 19:35:01.818451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.057 [2024-10-17 19:35:01.818465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.057 [2024-10-17 19:35:01.824697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.058 [2024-10-17 19:35:01.831802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.058 [2024-10-17 19:35:01.832314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.058 [2024-10-17 19:35:01.832359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.058 [2024-10-17 19:35:01.832392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.058 [2024-10-17 19:35:01.832962] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.058 [2024-10-17 19:35:01.833218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.058 [2024-10-17 19:35:01.833231] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.058 [2024-10-17 19:35:01.833241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.058 [2024-10-17 19:35:01.837298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.318 [2024-10-17 19:35:01.844754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.318 [2024-10-17 19:35:01.845196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.845213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.845221] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.845389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.845556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.845566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.845573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.848236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.857612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.858036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.858078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.858102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.858633] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.859022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.859040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.859055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.865281] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.872559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.873007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.873029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.873040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.873294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.873550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.873567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.873577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.877639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.885578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.885936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.885953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.885960] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.886133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.886305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.886315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.886322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.889078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.898300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.898711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.898728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.898736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.898894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.899054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.899063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.899069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.901596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.911125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.911471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.911487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.911494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.911659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.911820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.911829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.911835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.914355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.923990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.924418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.924455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.924481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.925010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.925171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.925180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.925186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.927803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.936925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.937269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.937286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.937293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.937454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.937617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.937627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.937634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.940145] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.949668] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.950059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.950075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.950082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.950241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.950400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.950410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.950417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.953049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 [2024-10-17 19:35:01.962519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.319 [2024-10-17 19:35:01.962945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.319 [2024-10-17 19:35:01.962990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.319 [2024-10-17 19:35:01.963014] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.319 [2024-10-17 19:35:01.963615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.319 [2024-10-17 19:35:01.964183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.319 [2024-10-17 19:35:01.964192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.319 [2024-10-17 19:35:01.964199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.319 [2024-10-17 19:35:01.966869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.319 9787.33 IOPS, 38.23 MiB/s [2024-10-17T17:35:02.104Z] [2024-10-17 19:35:01.975330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:01.975730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:01.975776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:01.975800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:01.976068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:01.976238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:01.976247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:01.976254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:01.978924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
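The entry interleaved above, "9787.33 IOPS, 38.23 MiB/s", is the test application's periodic throughput sample rather than part of the reset loop. The two figures are mutually consistent if each I/O is 4 KiB, which is an inference from the arithmetic (the log itself does not state the block size): 9787.33 * 4096 / 2^20 = 38.23. A quick check in C:

/* check.c: verify the IOPS and MiB/s figures agree under an assumed
 * 4 KiB I/O size (the block size is inferred, not stated in the log). */
#include <stdio.h>

int main(void)
{
    double iops = 9787.33;                    /* from the log entry */
    double io_bytes = 4096.0;                 /* assumed 4 KiB per I/O */
    double mib_s = iops * io_bytes / (1024.0 * 1024.0);
    printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_s);   /* prints 38.23 */
    return 0;
}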
00:27:38.320 [2024-10-17 19:35:01.988193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:01.988583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:01.988608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:01.988616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:01.988775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:01.988936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:01.988945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:01.988951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:01.991476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.001001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.001343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.001359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.001367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.001526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.001691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.001701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.001711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.004234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.013761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.014194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.014211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.014218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.014377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.014537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.014547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.014553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.017081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.026595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.026939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.026955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.026962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.027122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.027281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.027290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.027296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.029824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.039597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.040007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.040023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.040031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.040219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.040392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.040402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.040409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.043114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.052315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.052731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.052747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.052754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.052912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.053071] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.053080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.053086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.055612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.065150] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.065564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.065621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.065647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.066224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.066385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.066394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.066401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.069019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.077946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.078372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.078416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.078440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.079035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.079523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.079532] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.079538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.082061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.320 [2024-10-17 19:35:02.090674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.320 [2024-10-17 19:35:02.091086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.320 [2024-10-17 19:35:02.091102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.320 [2024-10-17 19:35:02.091109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.320 [2024-10-17 19:35:02.091268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.320 [2024-10-17 19:35:02.091430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.320 [2024-10-17 19:35:02.091440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.320 [2024-10-17 19:35:02.091445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.320 [2024-10-17 19:35:02.093969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.580 [2024-10-17 19:35:02.103587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.580 [2024-10-17 19:35:02.103946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.580 [2024-10-17 19:35:02.103963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.580 [2024-10-17 19:35:02.103970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.580 [2024-10-17 19:35:02.104138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.580 [2024-10-17 19:35:02.104307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.580 [2024-10-17 19:35:02.104316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.580 [2024-10-17 19:35:02.104322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.580 [2024-10-17 19:35:02.106879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.580 [2024-10-17 19:35:02.116404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.580 [2024-10-17 19:35:02.116827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.580 [2024-10-17 19:35:02.116882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.116906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.117408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.117569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.117578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.117585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.120110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.129264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.129686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.129732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.129756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.130336] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.130587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.130596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.130608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.133133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.142001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.142355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.142398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.142422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.142874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.143036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.143046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.143052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.145665] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.154737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.155133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.155149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.155156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.155315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.155475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.155485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.155491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.158025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.167551] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.167968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.168021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.168046] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.168559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.168726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.168736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.168742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.171265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.180405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.180745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.180767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.180774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.180934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.181094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.181103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.181110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.183639] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.193155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.193581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.193634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.193658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.194032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.194192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.194201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.194207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.196733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.205952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.206364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.206380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.206388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.206546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.206712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.206722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.206728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.209249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.218781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.219129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.219145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.219152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.219311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.219473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.219482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.219488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.222013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.231572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.231879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.231895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.231903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.232063] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.232223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.232232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.232238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.234765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.581 [2024-10-17 19:35:02.244294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.581 [2024-10-17 19:35:02.244694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.581 [2024-10-17 19:35:02.244741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.581 [2024-10-17 19:35:02.244764] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.581 [2024-10-17 19:35:02.245182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.581 [2024-10-17 19:35:02.245343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.581 [2024-10-17 19:35:02.245352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.581 [2024-10-17 19:35:02.245358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.581 [2024-10-17 19:35:02.247882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.257109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.257450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.257466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.257473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.257639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.257800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.257809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.257815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.260345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.269884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.270293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.270310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.270317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.270477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.270641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.270651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.270657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.273173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.282724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.283029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.283045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.283052] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.283211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.283370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.283379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.283386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.286059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.295748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.296095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.296113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.296121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.296293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.296466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.296476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.296482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.299170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.308604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.308904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.308922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.308933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.309100] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.309268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.309279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.309289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.311854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.321387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.321669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.321687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.321695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.321854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.322013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.322022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.322029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.324647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.334173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.334498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.334514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.334522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.334684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.334845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.334854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.334861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.337534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.347230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.347632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.347650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.347658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.347830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.348004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.348016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.348024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.350775] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.582 [2024-10-17 19:35:02.360038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.582 [2024-10-17 19:35:02.360315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.582 [2024-10-17 19:35:02.360333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.582 [2024-10-17 19:35:02.360340] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.582 [2024-10-17 19:35:02.360508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.582 [2024-10-17 19:35:02.360682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.582 [2024-10-17 19:35:02.360692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.582 [2024-10-17 19:35:02.360699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.582 [2024-10-17 19:35:02.363415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.842 [2024-10-17 19:35:02.373132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.842 [2024-10-17 19:35:02.373535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.373552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.373561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.373737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.373910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.373920] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.373927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.376672] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.386236] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.386589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.386610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.386618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.386791] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.386965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.386974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.386981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.389731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.399370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.399782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.399800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.399809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.399993] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.400179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.400190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.400199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.403117] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.412578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.412920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.412938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.412946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.413130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.413315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.413325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.413332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.416323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.425822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.426261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.426279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.426287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.426470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.426660] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.426670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.426678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.429583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.439012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.439432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.439450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.439459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.439652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.439836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.439846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.439853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.442769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.452274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.452614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.452632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.452640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.452823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.453008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.453018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.453025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.455947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.465437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.465883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.465900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.465909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.466092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.466275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.466285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.466292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.469208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.478649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.479023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.479041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.479050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.479232] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.479417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.479427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.479438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.482359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.491661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.492060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.492076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.492084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.492257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.492430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.843 [2024-10-17 19:35:02.492440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.843 [2024-10-17 19:35:02.492446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.843 [2024-10-17 19:35:02.495224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.843 [2024-10-17 19:35:02.504626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.843 [2024-10-17 19:35:02.505050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.843 [2024-10-17 19:35:02.505068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.843 [2024-10-17 19:35:02.505076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.843 [2024-10-17 19:35:02.505248] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.843 [2024-10-17 19:35:02.505421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.505431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.505438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.508184] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.517684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.518074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.518092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.518100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.518269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.518438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.518448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.518455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.521118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.530620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.530962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.530983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.530992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.531160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.531329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.531338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.531344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.533910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.543694] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.544084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.544102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.544110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.544284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.544457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.544466] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.544473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.547186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.556461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.556896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.556914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.556921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.557079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.557238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.557248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.557254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.559790] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.569178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.569547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.569564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.569572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.569735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.569899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.569908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.569915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.572435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.581902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.582171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.582187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.582195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.582354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.582513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.582523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.582529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.585059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.594695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.594967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.594983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.594990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.595148] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.595308] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.595317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.595324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.597853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.607538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.607945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.607991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.608015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.608586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.608753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.608762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.608769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.611298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:38.844 [2024-10-17 19:35:02.620381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:38.844 [2024-10-17 19:35:02.620752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:38.844 [2024-10-17 19:35:02.620770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:38.844 [2024-10-17 19:35:02.620778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:38.844 [2024-10-17 19:35:02.620937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:38.844 [2024-10-17 19:35:02.621097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:38.844 [2024-10-17 19:35:02.621106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:38.844 [2024-10-17 19:35:02.621112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:38.844 [2024-10-17 19:35:02.623782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.633377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.633724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.633752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.633760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.633920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.105 [2024-10-17 19:35:02.634080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.105 [2024-10-17 19:35:02.634089] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.105 [2024-10-17 19:35:02.634095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.105 [2024-10-17 19:35:02.636729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.646178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.646588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.646641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.646668] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.647177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.105 [2024-10-17 19:35:02.647337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.105 [2024-10-17 19:35:02.647346] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.105 [2024-10-17 19:35:02.647352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.105 [2024-10-17 19:35:02.649883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.658977] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.659411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.659455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.659487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.659959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.105 [2024-10-17 19:35:02.660127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.105 [2024-10-17 19:35:02.660136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.105 [2024-10-17 19:35:02.660143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.105 [2024-10-17 19:35:02.662666] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.671754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.672084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.672100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.672107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.672266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.105 [2024-10-17 19:35:02.672426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.105 [2024-10-17 19:35:02.672435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.105 [2024-10-17 19:35:02.672441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.105 [2024-10-17 19:35:02.675020] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.684597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.684936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.684980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.685003] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.685583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.105 [2024-10-17 19:35:02.686055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.105 [2024-10-17 19:35:02.686065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.105 [2024-10-17 19:35:02.686071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.105 [2024-10-17 19:35:02.688588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.697465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.697782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.697798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.697806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.697964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.105 [2024-10-17 19:35:02.698124] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.105 [2024-10-17 19:35:02.698136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.105 [2024-10-17 19:35:02.698142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.105 [2024-10-17 19:35:02.700674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.105 [2024-10-17 19:35:02.710198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.105 [2024-10-17 19:35:02.710588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.105 [2024-10-17 19:35:02.710610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.105 [2024-10-17 19:35:02.710618] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.105 [2024-10-17 19:35:02.710778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.710938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.710947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.710954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.713475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.722994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.723409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.723465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.723489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.724048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.724219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.724228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.724235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.726848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.735767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.736193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.736236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.736260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.736853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.737301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.737310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.737316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.739830] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.748651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.749070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.749087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.749094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.749252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.749412] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.749422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.749428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.751957] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.761476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.761809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.761825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.761832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.761991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.762151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.762160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.762167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.764706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.774222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.774636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.774653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.774660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.774843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.775011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.775021] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.775027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.777612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.786979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.787264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.787280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.787290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.787450] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.787616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.787626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.787648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.790317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.799854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.106 [2024-10-17 19:35:02.800210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.106 [2024-10-17 19:35:02.800227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:39.106 [2024-10-17 19:35:02.800235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:39.106 [2024-10-17 19:35:02.800402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:39.106 [2024-10-17 19:35:02.800571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.106 [2024-10-17 19:35:02.800580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.106 [2024-10-17 19:35:02.800587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.106 [2024-10-17 19:35:02.803257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.106 [2024-10-17 19:35:02.812916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.106 [2024-10-17 19:35:02.813309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.106 [2024-10-17 19:35:02.813326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.106 [2024-10-17 19:35:02.813333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.106 [2024-10-17 19:35:02.813501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.106 [2024-10-17 19:35:02.813673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.106 [2024-10-17 19:35:02.813684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.106 [2024-10-17 19:35:02.813690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.106 [2024-10-17 19:35:02.816352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.106 [2024-10-17 19:35:02.825742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.106 [2024-10-17 19:35:02.826171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.106 [2024-10-17 19:35:02.826187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.106 [2024-10-17 19:35:02.826194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.106 [2024-10-17 19:35:02.826354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.106 [2024-10-17 19:35:02.826513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.106 [2024-10-17 19:35:02.826528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.106 [2024-10-17 19:35:02.826535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.106 [2024-10-17 19:35:02.829064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.106 [2024-10-17 19:35:02.838736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.106 [2024-10-17 19:35:02.839145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.106 [2024-10-17 19:35:02.839161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.106 [2024-10-17 19:35:02.839168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.106 [2024-10-17 19:35:02.839327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.106 [2024-10-17 19:35:02.839487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.106 [2024-10-17 19:35:02.839496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.106 [2024-10-17 19:35:02.839503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.106 [2024-10-17 19:35:02.842031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.106 [2024-10-17 19:35:02.851537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.107 [2024-10-17 19:35:02.851886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.107 [2024-10-17 19:35:02.851903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.107 [2024-10-17 19:35:02.851911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.107 [2024-10-17 19:35:02.852070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.107 [2024-10-17 19:35:02.852229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.107 [2024-10-17 19:35:02.852239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.107 [2024-10-17 19:35:02.852245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.107 [2024-10-17 19:35:02.854772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.107 [2024-10-17 19:35:02.864300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.107 [2024-10-17 19:35:02.864707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.107 [2024-10-17 19:35:02.864724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.107 [2024-10-17 19:35:02.864732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.107 [2024-10-17 19:35:02.864892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.107 [2024-10-17 19:35:02.865052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.107 [2024-10-17 19:35:02.865061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.107 [2024-10-17 19:35:02.865067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.107 [2024-10-17 19:35:02.867589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.107 [2024-10-17 19:35:02.877173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.107 [2024-10-17 19:35:02.877586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.107 [2024-10-17 19:35:02.877642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.107 [2024-10-17 19:35:02.877667] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.107 [2024-10-17 19:35:02.878161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.107 [2024-10-17 19:35:02.878321] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.107 [2024-10-17 19:35:02.878329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.107 [2024-10-17 19:35:02.878335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.107 [2024-10-17 19:35:02.880858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.368 [2024-10-17 19:35:02.890172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.890596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.890619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.890627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.890795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.890973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.890982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.890989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.893558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.368 [2024-10-17 19:35:02.903055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.903388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.903404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.903412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.903570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.903737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.903747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.903753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.906275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.368 [2024-10-17 19:35:02.915787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.916195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.916230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.916255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.916859] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.917442] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.917467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.917499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.920011] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.368 [2024-10-17 19:35:02.928620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.929034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.929078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.929101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.929536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.929702] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.929710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.929717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.932238] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.368 [2024-10-17 19:35:02.941414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.941829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.941846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.941853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.942012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.942173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.942182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.942188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.944850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.368 [2024-10-17 19:35:02.954184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.954619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.954664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.954687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.955267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.955701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.955710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.955720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.958242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.368 [2024-10-17 19:35:02.967031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.967441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.967458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.967465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.967630] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.967790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.967800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.967806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.970331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.368 7340.50 IOPS, 28.67 MiB/s [2024-10-17T17:35:03.152Z] [2024-10-17 19:35:02.979767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.368 [2024-10-17 19:35:02.980155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.368 [2024-10-17 19:35:02.980172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.368 [2024-10-17 19:35:02.980180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.368 [2024-10-17 19:35:02.980338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.368 [2024-10-17 19:35:02.980497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.368 [2024-10-17 19:35:02.980507] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.368 [2024-10-17 19:35:02.980513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.368 [2024-10-17 19:35:02.983039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
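Interleaved with the errors above is a periodic throughput sample: 7340.50 IOPS, 28.67 MiB/s. The two figures are mutually consistent with a 4 KiB I/O size, since 7340.50 × 4096 B ≈ 28.67 MiB/s. A quick check (the 4 KiB block size is an assumption; the I/O size is not printed in this sample):

```python
# Sanity-check the interleaved performance sample: with an assumed
# 4 KiB block size, 7340.50 IOPS should match the reported 28.67 MiB/s.
iops = 7340.50
block_size = 4096                      # bytes, assumed (not printed in the log)
mib_per_s = iops * block_size / 2**20
print(f"{mib_per_s:.2f} MiB/s")        # -> 28.67 MiB/s, matching the sample
```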
00:27:39.368 [2024-10-17 19:35:02.992733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:02.993161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:02.993205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:02.993229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:02.993742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:02.993930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:02.993938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:02.993945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:02.996731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.369 [2024-10-17 19:35:03.005520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.005911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.005927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.005935] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.006094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.006253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.006262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.006269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.008800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.369 [2024-10-17 19:35:03.018322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.018664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.018681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.018689] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.018849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.019008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.019018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.019025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.021552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.369 [2024-10-17 19:35:03.031103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.031499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.031542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.031567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.032160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.032735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.032744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.032751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.035273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.369 [2024-10-17 19:35:03.044073] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.044441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.044458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.044466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.044657] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.044835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.044844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.044851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.047609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.369 [2024-10-17 19:35:03.057160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.057593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.057616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.057625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.057799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.057972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.057982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.057989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.060748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.369 [2024-10-17 19:35:03.069932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.070286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.070303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.070310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.070468] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.070632] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.070641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.070648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.073158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.369 [2024-10-17 19:35:03.082770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.083176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.083193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.083200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.083359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.083518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.083527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.083534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.086151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.369 [2024-10-17 19:35:03.095617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.096032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.096049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.096057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.096216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.096374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.096384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.096390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.098915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.369 [2024-10-17 19:35:03.108492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.369 [2024-10-17 19:35:03.108926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.369 [2024-10-17 19:35:03.108970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.369 [2024-10-17 19:35:03.108994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.369 [2024-10-17 19:35:03.109573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.369 [2024-10-17 19:35:03.109856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.369 [2024-10-17 19:35:03.109865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.369 [2024-10-17 19:35:03.109872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.369 [2024-10-17 19:35:03.112392] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.369 [2024-10-17 19:35:03.121316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.370 [2024-10-17 19:35:03.121655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-10-17 19:35:03.121672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.370 [2024-10-17 19:35:03.121679] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.370 [2024-10-17 19:35:03.121839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.370 [2024-10-17 19:35:03.121998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.370 [2024-10-17 19:35:03.122008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.370 [2024-10-17 19:35:03.122014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.370 [2024-10-17 19:35:03.124556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.370 [2024-10-17 19:35:03.134042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.370 [2024-10-17 19:35:03.134436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-10-17 19:35:03.134452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.370 [2024-10-17 19:35:03.134462] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.370 [2024-10-17 19:35:03.134628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.370 [2024-10-17 19:35:03.134788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.370 [2024-10-17 19:35:03.134797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.370 [2024-10-17 19:35:03.134804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.370 [2024-10-17 19:35:03.137419] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.370 [2024-10-17 19:35:03.146837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.370 [2024-10-17 19:35:03.147244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.370 [2024-10-17 19:35:03.147288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.370 [2024-10-17 19:35:03.147312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.370 [2024-10-17 19:35:03.147878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.370 [2024-10-17 19:35:03.148048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.370 [2024-10-17 19:35:03.148057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.370 [2024-10-17 19:35:03.148064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.370 [2024-10-17 19:35:03.150734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.631 [2024-10-17 19:35:03.159744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.160167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.160211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.160235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.160830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.161015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.161025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.161031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.163570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.631 [2024-10-17 19:35:03.172514] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.172854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.172871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.172878] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.173036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.173194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.173206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.173213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.175731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.631 [2024-10-17 19:35:03.185243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.185583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.185606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.185614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.185773] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.185933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.185942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.185948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.188468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.631 [2024-10-17 19:35:03.197990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.198374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.198391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.198399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.198558] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.198724] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.198734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.198741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.201260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.631 [2024-10-17 19:35:03.210775] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.211186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.211227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.211252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.211847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.212125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.212134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.212140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.214661] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.631 [2024-10-17 19:35:03.223585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.224006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.224050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.224073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.224508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.224690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.224698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.224705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.227347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.631 [2024-10-17 19:35:03.236406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.236754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.236788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.631 [2024-10-17 19:35:03.236796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.631 [2024-10-17 19:35:03.236964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.631 [2024-10-17 19:35:03.237132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.631 [2024-10-17 19:35:03.237142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.631 [2024-10-17 19:35:03.237149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.631 [2024-10-17 19:35:03.239770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.631 [2024-10-17 19:35:03.249145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.631 [2024-10-17 19:35:03.249562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.631 [2024-10-17 19:35:03.249619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.249644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.250130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.250292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.250301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.250307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.252832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.632 [2024-10-17 19:35:03.261933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.262300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.262317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.262329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.262489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.262661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.262670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.262677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.265193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.632 [2024-10-17 19:35:03.274716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.275044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.275060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.275068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.275226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.275385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.275395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.275401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.277926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.632 [2024-10-17 19:35:03.287452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.287856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.287872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.287880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.288039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.288199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.288209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.288215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.290744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.632 [2024-10-17 19:35:03.300269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.300704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.300722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.300730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.300898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.301066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.301079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.301086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.303817] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.632 [2024-10-17 19:35:03.313203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.313555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.313571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.313578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.313752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.313921] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.313930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.313937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.316606] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.632 [2024-10-17 19:35:03.326219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.326638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.326682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.326707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.327280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.327458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.327468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.327474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.330005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.632 [2024-10-17 19:35:03.339163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.632 [2024-10-17 19:35:03.339514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.632 [2024-10-17 19:35:03.339531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.632 [2024-10-17 19:35:03.339538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.632 [2024-10-17 19:35:03.339703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.632 [2024-10-17 19:35:03.339864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.632 [2024-10-17 19:35:03.339873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.632 [2024-10-17 19:35:03.339880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.632 [2024-10-17 19:35:03.342404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.633 [2024-10-17 19:35:03.351930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.633 [2024-10-17 19:35:03.352321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.633 [2024-10-17 19:35:03.352337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.633 [2024-10-17 19:35:03.352344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.633 [2024-10-17 19:35:03.352502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.633 [2024-10-17 19:35:03.352667] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.633 [2024-10-17 19:35:03.352677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.633 [2024-10-17 19:35:03.352684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.633 [2024-10-17 19:35:03.355190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.633 [2024-10-17 19:35:03.364718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.633 [2024-10-17 19:35:03.365078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.633 [2024-10-17 19:35:03.365123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.633 [2024-10-17 19:35:03.365146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.633 [2024-10-17 19:35:03.365611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.633 [2024-10-17 19:35:03.365774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.633 [2024-10-17 19:35:03.365784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.633 [2024-10-17 19:35:03.365790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.633 [2024-10-17 19:35:03.368312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.633 [2024-10-17 19:35:03.377625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.633 [2024-10-17 19:35:03.377988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.633 [2024-10-17 19:35:03.378004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.633 [2024-10-17 19:35:03.378012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.633 [2024-10-17 19:35:03.378171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.633 [2024-10-17 19:35:03.378330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.633 [2024-10-17 19:35:03.378339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.633 [2024-10-17 19:35:03.378346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.633 [2024-10-17 19:35:03.380872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.633 [2024-10-17 19:35:03.390378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.633 [2024-10-17 19:35:03.390791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.633 [2024-10-17 19:35:03.390808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.633 [2024-10-17 19:35:03.390815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.633 [2024-10-17 19:35:03.390976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.633 [2024-10-17 19:35:03.391135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.633 [2024-10-17 19:35:03.391144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.633 [2024-10-17 19:35:03.391150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.633 [2024-10-17 19:35:03.393677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.633 [2024-10-17 19:35:03.403184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.633 [2024-10-17 19:35:03.403614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.633 [2024-10-17 19:35:03.403659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.633 [2024-10-17 19:35:03.403682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.633 [2024-10-17 19:35:03.404263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.633 [2024-10-17 19:35:03.404733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.633 [2024-10-17 19:35:03.404743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.633 [2024-10-17 19:35:03.404749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.633 [2024-10-17 19:35:03.407269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.894 [2024-10-17 19:35:03.416154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.416578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.416634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.416659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.417239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.417767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.417775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.417782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.420384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.894 [2024-10-17 19:35:03.429017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.429429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.429445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.429453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.429618] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.429779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.429789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.429799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.432323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.894 [2024-10-17 19:35:03.441824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.442188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.442204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.442212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.442370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.442530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.442540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.442546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.445076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.894 [2024-10-17 19:35:03.454599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.455004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.455020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.455027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.455186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.455346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.455355] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.455361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.457887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.894 [2024-10-17 19:35:03.467410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.467880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.467925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.467949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.468528] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.468953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.468962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.468968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.471489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.894 [2024-10-17 19:35:03.480214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.480626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.480650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.480657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.480817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.480977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.480986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.480993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.483518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.894 [2024-10-17 19:35:03.493038] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.493439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.493482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.493505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.493998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.494158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.494165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.494171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.496680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.894 [2024-10-17 19:35:03.505746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.506083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.506099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.506106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.506264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.506424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.506433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.506440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.894 [2024-10-17 19:35:03.508970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.894 [2024-10-17 19:35:03.518675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.894 [2024-10-17 19:35:03.519018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.894 [2024-10-17 19:35:03.519067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.894 [2024-10-17 19:35:03.519091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.894 [2024-10-17 19:35:03.519688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.894 [2024-10-17 19:35:03.519906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.894 [2024-10-17 19:35:03.519915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.894 [2024-10-17 19:35:03.519922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.522445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.895 [2024-10-17 19:35:03.531499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.531920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.531936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.531944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.532103] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.532263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.532271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.532277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.534811] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.895 [2024-10-17 19:35:03.544341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.544692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.544709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.544717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.544877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.545036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.545046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.545052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.547569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.895 [2024-10-17 19:35:03.557094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.557523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.557540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.557547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.557732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.557901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.557910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.557917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.560583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.895 [2024-10-17 19:35:03.570151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.570473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.570490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.570497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.570673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.570842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.570852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.570859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.573525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.895 [2024-10-17 19:35:03.583130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.583549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.583587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.583629] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.584160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.584330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.584339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.584345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.586946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.895 [2024-10-17 19:35:03.595898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.596328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.596372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.596396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.596907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.597079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.597088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.597094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.599613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.895 [2024-10-17 19:35:03.608679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.609095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.609111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.609122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.609281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.609440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.609449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.609456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.611975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.895 [2024-10-17 19:35:03.621487] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.621878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.621894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.621901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.622060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.622219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.622228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.622234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.624757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.895 [2024-10-17 19:35:03.634205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.634511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.634527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.634534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.634699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.634860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.634869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.634875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.895 [2024-10-17 19:35:03.637412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.895 [2024-10-17 19:35:03.647183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.895 [2024-10-17 19:35:03.647527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.895 [2024-10-17 19:35:03.647542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.895 [2024-10-17 19:35:03.647550] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.895 [2024-10-17 19:35:03.647723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.895 [2024-10-17 19:35:03.647892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.895 [2024-10-17 19:35:03.647904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.895 [2024-10-17 19:35:03.647911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.896 [2024-10-17 19:35:03.650540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:39.896 [2024-10-17 19:35:03.659899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.896 [2024-10-17 19:35:03.660308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.896 [2024-10-17 19:35:03.660322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.896 [2024-10-17 19:35:03.660330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.896 [2024-10-17 19:35:03.660490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.896 [2024-10-17 19:35:03.660655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.896 [2024-10-17 19:35:03.660662] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.896 [2024-10-17 19:35:03.660670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.896 [2024-10-17 19:35:03.663206] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:39.896 [2024-10-17 19:35:03.672771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.896 [2024-10-17 19:35:03.673171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.896 [2024-10-17 19:35:03.673187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:39.896 [2024-10-17 19:35:03.673195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:39.896 [2024-10-17 19:35:03.673364] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:39.896 [2024-10-17 19:35:03.673531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.896 [2024-10-17 19:35:03.673539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.896 [2024-10-17 19:35:03.673547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.896 [2024-10-17 19:35:03.676209] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.156 [2024-10-17 19:35:03.685695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.156 [2024-10-17 19:35:03.686107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.156 [2024-10-17 19:35:03.686122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.156 [2024-10-17 19:35:03.686131] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.156 [2024-10-17 19:35:03.686292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.156 [2024-10-17 19:35:03.686452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.156 [2024-10-17 19:35:03.686460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.156 [2024-10-17 19:35:03.686467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.156 [2024-10-17 19:35:03.688992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.156 [2024-10-17 19:35:03.698512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.156 [2024-10-17 19:35:03.698822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.156 [2024-10-17 19:35:03.698838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.156 [2024-10-17 19:35:03.698847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.156 [2024-10-17 19:35:03.699006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.156 [2024-10-17 19:35:03.699165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.156 [2024-10-17 19:35:03.699173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.156 [2024-10-17 19:35:03.699181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.156 [2024-10-17 19:35:03.701714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.156 [2024-10-17 19:35:03.711365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.156 [2024-10-17 19:35:03.711772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.156 [2024-10-17 19:35:03.711811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.156 [2024-10-17 19:35:03.711838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.156 [2024-10-17 19:35:03.712419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.156 [2024-10-17 19:35:03.712618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.156 [2024-10-17 19:35:03.712627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.156 [2024-10-17 19:35:03.712634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.156 [2024-10-17 19:35:03.715153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.157 [2024-10-17 19:35:03.724089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.724478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.724493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.724502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.724666] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.724828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.724835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.724843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.727474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.157 [2024-10-17 19:35:03.736863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.737278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.737293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.737301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.737479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.737672] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.737682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.737690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.740317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.157 [2024-10-17 19:35:03.749700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.750003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.750018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.750026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.750185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.750344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.750351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.750358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.752886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.157 [2024-10-17 19:35:03.762415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.762811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.762828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.762836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.762996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.763156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.763164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.763171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.765707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.157 [2024-10-17 19:35:03.775390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.775730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.775747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.775756] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.775930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.776104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.776113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.776124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.778883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.157 [2024-10-17 19:35:03.788231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.788639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.788656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.788664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.788839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.788999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.789007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.789015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.791538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.157 [2024-10-17 19:35:03.801070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.801388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.801403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.801411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.801571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.801735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.801744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.801751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.804277] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.157 [2024-10-17 19:35:03.813898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.814226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.814267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.814292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.814736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.814896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.814904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.814911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.817427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.157 [2024-10-17 19:35:03.826753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.827022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.827037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.827045] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.827203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.827362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.827370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.827378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.829997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.157 [2024-10-17 19:35:03.839704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.840098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.840113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.840121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.840280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.840440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.840448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.840456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.157 [2024-10-17 19:35:03.842983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.157 [2024-10-17 19:35:03.852511] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.157 [2024-10-17 19:35:03.852759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.157 [2024-10-17 19:35:03.852775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.157 [2024-10-17 19:35:03.852784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.157 [2024-10-17 19:35:03.852943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.157 [2024-10-17 19:35:03.853103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.157 [2024-10-17 19:35:03.853111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.157 [2024-10-17 19:35:03.853118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.855648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.158 [2024-10-17 19:35:03.865342] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.158 [2024-10-17 19:35:03.865734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.158 [2024-10-17 19:35:03.865751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.158 [2024-10-17 19:35:03.865759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.158 [2024-10-17 19:35:03.865924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.158 [2024-10-17 19:35:03.866084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.158 [2024-10-17 19:35:03.866095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.158 [2024-10-17 19:35:03.866102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.868631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.158 [2024-10-17 19:35:03.878196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.158 [2024-10-17 19:35:03.878489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.158 [2024-10-17 19:35:03.878505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.158 [2024-10-17 19:35:03.878513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.158 [2024-10-17 19:35:03.878688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.158 [2024-10-17 19:35:03.878857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.158 [2024-10-17 19:35:03.878866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.158 [2024-10-17 19:35:03.878874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.881413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.158 [2024-10-17 19:35:03.890937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.158 [2024-10-17 19:35:03.891206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.158 [2024-10-17 19:35:03.891221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.158 [2024-10-17 19:35:03.891230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.158 [2024-10-17 19:35:03.891389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.158 [2024-10-17 19:35:03.891549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.158 [2024-10-17 19:35:03.891557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.158 [2024-10-17 19:35:03.891564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.894083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.158 [2024-10-17 19:35:03.903755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.158 [2024-10-17 19:35:03.904028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.158 [2024-10-17 19:35:03.904043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.158 [2024-10-17 19:35:03.904051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.158 [2024-10-17 19:35:03.904211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.158 [2024-10-17 19:35:03.904371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.158 [2024-10-17 19:35:03.904379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.158 [2024-10-17 19:35:03.904389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.906920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.158 [2024-10-17 19:35:03.916594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.158 [2024-10-17 19:35:03.916996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.158 [2024-10-17 19:35:03.917042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.158 [2024-10-17 19:35:03.917068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.158 [2024-10-17 19:35:03.917662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.158 [2024-10-17 19:35:03.918155] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.158 [2024-10-17 19:35:03.918164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.158 [2024-10-17 19:35:03.918172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.920697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.158 [2024-10-17 19:35:03.929674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.158 [2024-10-17 19:35:03.929958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.158 [2024-10-17 19:35:03.929973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.158 [2024-10-17 19:35:03.929982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.158 [2024-10-17 19:35:03.930155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.158 [2024-10-17 19:35:03.930328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.158 [2024-10-17 19:35:03.930336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.158 [2024-10-17 19:35:03.930344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.158 [2024-10-17 19:35:03.933091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.418 [2024-10-17 19:35:03.942687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.418 [2024-10-17 19:35:03.942958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:03.942973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:03.942983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:03.943157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:03.943330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:03.943339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:03.943347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:03.946225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.419 [2024-10-17 19:35:03.955825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:03.956172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:03.956191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:03.956199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:03.956382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:03.956565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:03.956572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:03.956581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:03.959502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.419 [2024-10-17 19:35:03.968921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:03.969262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:03.969278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:03.969287] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:03.969460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:03.969639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:03.969647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:03.969655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:03.972402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.419 5872.40 IOPS, 22.94 MiB/s [2024-10-17T17:35:04.203Z] [2024-10-17 19:35:03.981924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:03.982270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:03.982286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:03.982294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:03.982466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:03.982646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:03.982654] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:03.982661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:03.985471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
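The interleaved "5872.40 IOPS, 22.94 MiB/s" entry is bdevperf's periodic throughput sample, stamped on its own UTC clock (the bracketed 17:35:04Z versus the target log's local 19:35:04). The two figures are consistent with a 4 KiB I/O size: 5872.40 IOPS × 4096 B ≈ 24.05 MB/s = 22.94 MiB/s. The I/O size is an inference from that arithmetic, not stated in the log; the sample shows I/O still completing against other paths while this controller's reconnects fail.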
00:27:40.419 [2024-10-17 19:35:03.995163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:03.995452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:03.995470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:03.995479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:03.995680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:03.995881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:03.995891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:03.995899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:03.998887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.419 [2024-10-17 19:35:04.008374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:04.008802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:04.008819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:04.008828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:04.009001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:04.009175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:04.009184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:04.009190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:04.012036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.419 [2024-10-17 19:35:04.021392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:04.021813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:04.021831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:04.021840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:04.022024] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:04.022208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:04.022218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:04.022225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:04.025142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.419 [2024-10-17 19:35:04.034389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:04.034820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:04.034838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:04.034846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:04.035018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:04.035191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:04.035201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:04.035208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:04.038068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.419 [2024-10-17 19:35:04.047528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:04.047976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:04.047993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:04.048001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:04.048184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:04.048368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.419 [2024-10-17 19:35:04.048378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.419 [2024-10-17 19:35:04.048386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.419 [2024-10-17 19:35:04.051243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.419 [2024-10-17 19:35:04.060698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.419 [2024-10-17 19:35:04.061138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.419 [2024-10-17 19:35:04.061155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.419 [2024-10-17 19:35:04.061164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.419 [2024-10-17 19:35:04.061347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.419 [2024-10-17 19:35:04.061532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.061543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.061550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.064484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.420 [2024-10-17 19:35:04.073829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.074185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.074203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.074211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.074404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.074590] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.074605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.074614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.077378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.420 [2024-10-17 19:35:04.086873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.087207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.087224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.087236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.087410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.087584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.087594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.087608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.090351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.420 [2024-10-17 19:35:04.099989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.100341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.100358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.100366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.100539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.100719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.100729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.100737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.103660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.420 [2024-10-17 19:35:04.113136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.113573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.113590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.113599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.113790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.113973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.113983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.113990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.116909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.420 [2024-10-17 19:35:04.126383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.126818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.126837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.126845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.127040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.127226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.127242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.127249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.130170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.420 [2024-10-17 19:35:04.139430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.139838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.139855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.139865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.140040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.140214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.140224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.140230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.142979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.420 [2024-10-17 19:35:04.152536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.152965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.153008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.153033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.153571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.153751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.153761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.153768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.156494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.420 [2024-10-17 19:35:04.165508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.165889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.165906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.165914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.166081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.166249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.166258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.166265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.168936] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.420 [2024-10-17 19:35:04.178286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.178677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.178695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.178703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.420 [2024-10-17 19:35:04.178870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.420 [2024-10-17 19:35:04.179040] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.420 [2024-10-17 19:35:04.179049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.420 [2024-10-17 19:35:04.179056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.420 [2024-10-17 19:35:04.181626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.420 [2024-10-17 19:35:04.190994] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.420 [2024-10-17 19:35:04.191382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.420 [2024-10-17 19:35:04.191398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.420 [2024-10-17 19:35:04.191405] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.421 [2024-10-17 19:35:04.191563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.421 [2024-10-17 19:35:04.191728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.421 [2024-10-17 19:35:04.191738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.421 [2024-10-17 19:35:04.191744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.421 [2024-10-17 19:35:04.194263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.681 [2024-10-17 19:35:04.203910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.681 [2024-10-17 19:35:04.204249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.681 [2024-10-17 19:35:04.204266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.681 [2024-10-17 19:35:04.204274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.681 [2024-10-17 19:35:04.204441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.681 [2024-10-17 19:35:04.204625] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.681 [2024-10-17 19:35:04.204635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.681 [2024-10-17 19:35:04.204641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.681 [2024-10-17 19:35:04.207162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.681 [2024-10-17 19:35:04.216686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.681 [2024-10-17 19:35:04.217077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.681 [2024-10-17 19:35:04.217093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.681 [2024-10-17 19:35:04.217101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.681 [2024-10-17 19:35:04.217264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.681 [2024-10-17 19:35:04.217423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.681 [2024-10-17 19:35:04.217432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.681 [2024-10-17 19:35:04.217438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.681 [2024-10-17 19:35:04.219965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.681 [2024-10-17 19:35:04.229635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.681 [2024-10-17 19:35:04.230057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.681 [2024-10-17 19:35:04.230073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.681 [2024-10-17 19:35:04.230080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.681 [2024-10-17 19:35:04.230239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.681 [2024-10-17 19:35:04.230398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.681 [2024-10-17 19:35:04.230408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.681 [2024-10-17 19:35:04.230414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.681 [2024-10-17 19:35:04.233088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.681 [2024-10-17 19:35:04.242720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.681 [2024-10-17 19:35:04.243149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.681 [2024-10-17 19:35:04.243166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.681 [2024-10-17 19:35:04.243174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.681 [2024-10-17 19:35:04.243347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.681 [2024-10-17 19:35:04.243520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.681 [2024-10-17 19:35:04.243529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.681 [2024-10-17 19:35:04.243536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.681 [2024-10-17 19:35:04.246269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.681 [2024-10-17 19:35:04.255538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.681 [2024-10-17 19:35:04.255877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.681 [2024-10-17 19:35:04.255893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.681 [2024-10-17 19:35:04.255900] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.681 [2024-10-17 19:35:04.256058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.681 [2024-10-17 19:35:04.256217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.681 [2024-10-17 19:35:04.256226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.681 [2024-10-17 19:35:04.256236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.681 [2024-10-17 19:35:04.258758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.681 [2024-10-17 19:35:04.268297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.681 [2024-10-17 19:35:04.268718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.268735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.268743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.268901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.269061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.269070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.269076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.271605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.682 [2024-10-17 19:35:04.281224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.281538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.281554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.281563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.281727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.281887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.281896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.281902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.284424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.682 [2024-10-17 19:35:04.293952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.294360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.294403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.294427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.294986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.295148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.295157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.295163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.297685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.682 [2024-10-17 19:35:04.306767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.307091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.307107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.307115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.307274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.307433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.307442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.307449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.309980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.682 [2024-10-17 19:35:04.319491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.319830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.319847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.319854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.320013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.320173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.320182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.320189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.322718] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.682 [2024-10-17 19:35:04.332271] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.332679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.332696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.332703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.332863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.333022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.333031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.333038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.335550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.682 [2024-10-17 19:35:04.345012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.345403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.345419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.345427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.345589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.345755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.345764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.345770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.348291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.682 [2024-10-17 19:35:04.357837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.358252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.358268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.358277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.358435] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.358595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.358610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.358617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.361134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.682 [2024-10-17 19:35:04.370650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.370975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.370994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.371001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.371160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.371319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.371329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.371335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.373857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.682 [2024-10-17 19:35:04.383462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.383772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.383789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.383797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.383954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.384113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.384122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.384132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.682 [2024-10-17 19:35:04.386652] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.682 [2024-10-17 19:35:04.396312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.682 [2024-10-17 19:35:04.396730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.682 [2024-10-17 19:35:04.396775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.682 [2024-10-17 19:35:04.396799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.682 [2024-10-17 19:35:04.396996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.682 [2024-10-17 19:35:04.397156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.682 [2024-10-17 19:35:04.397165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.682 [2024-10-17 19:35:04.397172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.683 [2024-10-17 19:35:04.399701] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.683 [2024-10-17 19:35:04.409068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.683 [2024-10-17 19:35:04.409487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.683 [2024-10-17 19:35:04.409523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.683 [2024-10-17 19:35:04.409549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.683 [2024-10-17 19:35:04.410128] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.683 [2024-10-17 19:35:04.410289] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.683 [2024-10-17 19:35:04.410298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.683 [2024-10-17 19:35:04.410305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.683 [2024-10-17 19:35:04.412920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.683 [2024-10-17 19:35:04.421845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.683 [2024-10-17 19:35:04.422255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.683 [2024-10-17 19:35:04.422271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.683 [2024-10-17 19:35:04.422278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.683 [2024-10-17 19:35:04.422438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.683 [2024-10-17 19:35:04.422597] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.683 [2024-10-17 19:35:04.422612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.683 [2024-10-17 19:35:04.422619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.683 [2024-10-17 19:35:04.425139] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.683 [2024-10-17 19:35:04.434556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.683 [2024-10-17 19:35:04.434971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.683 [2024-10-17 19:35:04.434990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.683 [2024-10-17 19:35:04.434998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.683 [2024-10-17 19:35:04.435157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.683 [2024-10-17 19:35:04.435317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.683 [2024-10-17 19:35:04.435326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.683 [2024-10-17 19:35:04.435332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.683 [2024-10-17 19:35:04.437861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.683 [2024-10-17 19:35:04.447383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.683 [2024-10-17 19:35:04.447751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.683 [2024-10-17 19:35:04.447767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.683 [2024-10-17 19:35:04.447775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.683 [2024-10-17 19:35:04.447933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.683 [2024-10-17 19:35:04.448093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.683 [2024-10-17 19:35:04.448102] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.683 [2024-10-17 19:35:04.448108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.683 [2024-10-17 19:35:04.450637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.683 [2024-10-17 19:35:04.460159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.683 [2024-10-17 19:35:04.460499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.683 [2024-10-17 19:35:04.460543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.683 [2024-10-17 19:35:04.460567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.683 [2024-10-17 19:35:04.461160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.683 [2024-10-17 19:35:04.461556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.683 [2024-10-17 19:35:04.461565] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.683 [2024-10-17 19:35:04.461572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.683 [2024-10-17 19:35:04.464251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.945 [2024-10-17 19:35:04.473104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.945 [2024-10-17 19:35:04.473490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.945 [2024-10-17 19:35:04.473506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.945 [2024-10-17 19:35:04.473514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.945 [2024-10-17 19:35:04.473678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.945 [2024-10-17 19:35:04.473841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.945 [2024-10-17 19:35:04.473850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.945 [2024-10-17 19:35:04.473856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.945 [2024-10-17 19:35:04.476369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.945 [2024-10-17 19:35:04.485891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.945 [2024-10-17 19:35:04.486245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.945 [2024-10-17 19:35:04.486261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.945 [2024-10-17 19:35:04.486268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.945 [2024-10-17 19:35:04.486427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.945 [2024-10-17 19:35:04.486586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.945 [2024-10-17 19:35:04.486595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.945 [2024-10-17 19:35:04.486609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.945 [2024-10-17 19:35:04.489289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:40.945 [2024-10-17 19:35:04.498795] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:40.945 [2024-10-17 19:35:04.499218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:40.945 [2024-10-17 19:35:04.499234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:40.945 [2024-10-17 19:35:04.499242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:40.945 [2024-10-17 19:35:04.499410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:40.945 [2024-10-17 19:35:04.499577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:40.945 [2024-10-17 19:35:04.499587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:40.945 [2024-10-17 19:35:04.499594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:40.945 [2024-10-17 19:35:04.502264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:40.945 [2024-10-17 19:35:04.511705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.945 [2024-10-17 19:35:04.512025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.945 [2024-10-17 19:35:04.512041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.945 [2024-10-17 19:35:04.512049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.945 [2024-10-17 19:35:04.512216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.945 [2024-10-17 19:35:04.512385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.945 [2024-10-17 19:35:04.512395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.945 [2024-10-17 19:35:04.512401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.945 [2024-10-17 19:35:04.515074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2250142 Killed "${NVMF_APP[@]}" "$@"
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:40.945 [2024-10-17 19:35:04.524671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.945 [2024-10-17 19:35:04.525089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.945 [2024-10-17 19:35:04.525136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.945 [2024-10-17 19:35:04.525160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.945 [2024-10-17 19:35:04.525623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.945 [2024-10-17 19:35:04.525794] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.945 [2024-10-17 19:35:04.525803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.945 [2024-10-17 19:35:04.525810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.945 [2024-10-17 19:35:04.528477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=2251547
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 2251547
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2251547 ']'
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:40.945 19:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:40.945 [2024-10-17 19:35:04.537667] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.945 [2024-10-17 19:35:04.538043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.945 [2024-10-17 19:35:04.538062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.945 [2024-10-17 19:35:04.538071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.945 [2024-10-17 19:35:04.538239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.945 [2024-10-17 19:35:04.538409] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.945 [2024-10-17 19:35:04.538419] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.945 [2024-10-17 19:35:04.538427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.945 [2024-10-17 19:35:04.541329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.945 [2024-10-17 19:35:04.550611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.945 [2024-10-17 19:35:04.550971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.945 [2024-10-17 19:35:04.551017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.945 [2024-10-17 19:35:04.551043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.945 [2024-10-17 19:35:04.551510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.945 [2024-10-17 19:35:04.551685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.945 [2024-10-17 19:35:04.551695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.945 [2024-10-17 19:35:04.551702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.554344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.563416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.563760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.563777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.563785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.563953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.564121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.564130] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.564137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.566720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.576244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.576672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.576718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.576743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.577322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.577558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.577567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.577573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.580138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.580442] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
00:27:40.946 [2024-10-17 19:35:04.580487] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:40.946 [2024-10-17 19:35:04.589075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.589515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.589561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.589586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.590186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.590611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.590622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.590628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.593148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.602078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.602527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.602571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.602595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.603195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.603752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.603762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.603769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.606435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.615061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.615406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.615423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.615431] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.615604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.615773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.615782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.615789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.618458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.628114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.628384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.628400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.628411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.628579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.628753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.628762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.628769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.631437] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.641100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.641383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.641400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.641408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.641576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.641748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.641759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.641765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.644430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.654030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.654324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.654341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.654350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.654517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.654691] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.654701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.654708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.657371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.661934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:40.946 [2024-10-17 19:35:04.666976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.667403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.667419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.667428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.667598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.667778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.667788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.667794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.670464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.679909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.680311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.680328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.680336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.680504] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.680676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.680687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.680694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.683351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.692778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.946 [2024-10-17 19:35:04.693177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.946 [2024-10-17 19:35:04.693194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.946 [2024-10-17 19:35:04.693203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.946 [2024-10-17 19:35:04.693370] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.946 [2024-10-17 19:35:04.693539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.946 [2024-10-17 19:35:04.693549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.946 [2024-10-17 19:35:04.693556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.946 [2024-10-17 19:35:04.696217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.946 [2024-10-17 19:35:04.702185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:40.946 [2024-10-17 19:35:04.702214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:40.946 [2024-10-17 19:35:04.702220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:40.946 [2024-10-17 19:35:04.702226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:40.946 [2024-10-17 19:35:04.702231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:40.947 [2024-10-17 19:35:04.703621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:40.947 [2024-10-17 19:35:04.703689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:40.947 [2024-10-17 19:35:04.703690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:40.947 [2024-10-17 19:35:04.705707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.947 [2024-10-17 19:35:04.706072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.947 [2024-10-17 19:35:04.706094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.947 [2024-10-17 19:35:04.706102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.947 [2024-10-17 19:35:04.706275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.947 [2024-10-17 19:35:04.706451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.947 [2024-10-17 19:35:04.706462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.947 [2024-10-17 19:35:04.706469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.947 [2024-10-17 19:35:04.709217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:40.947 [2024-10-17 19:35:04.718772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:40.947 [2024-10-17 19:35:04.719148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:40.947 [2024-10-17 19:35:04.719168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:40.947 [2024-10-17 19:35:04.719176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:40.947 [2024-10-17 19:35:04.719349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:40.947 [2024-10-17 19:35:04.719524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:40.947 [2024-10-17 19:35:04.719534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:40.947 [2024-10-17 19:35:04.719541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:40.947 [2024-10-17 19:35:04.722288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.731858] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.732292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.732312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.732321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.732495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.732673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.732683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.732690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.735440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.744836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.745257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.745276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.745286] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.745458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.745644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.745655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.745663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.748406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.757811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.758185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.758205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.758214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.758387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.758563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.758572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.758579] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.761328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.770917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.771353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.771370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.771379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.771551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.771729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.771739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.771746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.774487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.783878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.784287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.784305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.784312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.784485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.784664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.784674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.784681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.787423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.796975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.797406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.797423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.797432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.797609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.797783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.797793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.797800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.800539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.809914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.810321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.810338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.810346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.810518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.810696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.810706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.810712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.813452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.822992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.823343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.823360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.823369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.208 [2024-10-17 19:35:04.823541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.208 [2024-10-17 19:35:04.823719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.208 [2024-10-17 19:35:04.823729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.208 [2024-10-17 19:35:04.823736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.208 [2024-10-17 19:35:04.826476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.208 [2024-10-17 19:35:04.836021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.208 [2024-10-17 19:35:04.836449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.208 [2024-10-17 19:35:04.836465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.208 [2024-10-17 19:35:04.836476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.836655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.836829] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.836838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.836845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.839585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.848965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.849394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.849410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.849418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.849590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.849767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.849778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.849784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.852523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.861908] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.862350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.862367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.862375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.862547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.862725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.862735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.862743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.865494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.874887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.875295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.875312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.875320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.875492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.875670] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.875683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.875690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.878429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.887960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.888377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.888395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.888402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.888575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.888753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.888764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.888770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.891512] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.901052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.901457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.901475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.901483] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.901659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.901834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.901844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.901851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.904581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.914120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.914548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.914565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.914574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.914750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.914924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.914934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.914941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.917678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.927064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.927499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.927516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.927524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.927700] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.927873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.927883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.927890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.930633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.940177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.940607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.940624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.940633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.940805] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.940979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.940989] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.940996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.943736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.953275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.953703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.953721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.953729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.953901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.954076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.954085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.954092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.956836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 [2024-10-17 19:35:04.966230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.966654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.209 [2024-10-17 19:35:04.966672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.209 [2024-10-17 19:35:04.966680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.209 [2024-10-17 19:35:04.966857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.209 [2024-10-17 19:35:04.967030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.209 [2024-10-17 19:35:04.967040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.209 [2024-10-17 19:35:04.967046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.209 [2024-10-17 19:35:04.969796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.209 4893.67 IOPS, 19.12 MiB/s [2024-10-17T17:35:04.993Z] [2024-10-17 19:35:04.980433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.209 [2024-10-17 19:35:04.980791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.210 [2024-10-17 19:35:04.980808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.210 [2024-10-17 19:35:04.980816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.210 [2024-10-17 19:35:04.980988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.210 [2024-10-17 19:35:04.981160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.210 [2024-10-17 19:35:04.981170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.210 [2024-10-17 19:35:04.981176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.210 [2024-10-17 19:35:04.983922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.470 [2024-10-17 19:35:04.993466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.470 [2024-10-17 19:35:04.993755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.470 [2024-10-17 19:35:04.993772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.470 [2024-10-17 19:35:04.993780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.470 [2024-10-17 19:35:04.993952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.470 [2024-10-17 19:35:04.994127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.470 [2024-10-17 19:35:04.994137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.470 [2024-10-17 19:35:04.994144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.470 [2024-10-17 19:35:04.996896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.470 [2024-10-17 19:35:05.006432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.470 [2024-10-17 19:35:05.006855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.470 [2024-10-17 19:35:05.006874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.470 [2024-10-17 19:35:05.006884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.470 [2024-10-17 19:35:05.007058] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.470 [2024-10-17 19:35:05.007231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.470 [2024-10-17 19:35:05.007242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.470 [2024-10-17 19:35:05.007253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.470 [2024-10-17 19:35:05.009999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.470 [2024-10-17 19:35:05.019389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.470 [2024-10-17 19:35:05.019816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.470 [2024-10-17 19:35:05.019834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.470 [2024-10-17 19:35:05.019842] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.470 [2024-10-17 19:35:05.020015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.470 [2024-10-17 19:35:05.020189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.471 [2024-10-17 19:35:05.020198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.471 [2024-10-17 19:35:05.020205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.471 [2024-10-17 19:35:05.022944] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.471 [2024-10-17 19:35:05.032484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:41.471 [2024-10-17 19:35:05.032899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:41.471 [2024-10-17 19:35:05.032916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420
00:27:41.471 [2024-10-17 19:35:05.032924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set
00:27:41.471 [2024-10-17 19:35:05.033095] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor
00:27:41.471 [2024-10-17 19:35:05.033269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:41.471 [2024-10-17 19:35:05.033279] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:41.471 [2024-10-17 19:35:05.033286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:41.471 [2024-10-17 19:35:05.036023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:41.471 [2024-10-17 19:35:05.045550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.045957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.045975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.045983] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.046155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.046328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.046338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.046345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.049088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.471 [2024-10-17 19:35:05.058624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.059049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.059066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.059073] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.059246] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.059420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.059430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.059437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.062178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.471 [2024-10-17 19:35:05.071571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.072004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.072021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.072029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.072201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.072375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.072385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.072391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.075134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.471 [2024-10-17 19:35:05.084522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.084863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.084880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.084888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.085060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.085234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.085244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.085251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.087999] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.471 [2024-10-17 19:35:05.097544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.097952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.097969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.097977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.098154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.098328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.098338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.098345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.101086] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.471 [2024-10-17 19:35:05.110632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.111060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.111076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.111084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.111256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.111430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.111440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.111447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.114193] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.471 [2024-10-17 19:35:05.123574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.124005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.124023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.124031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.124203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.124378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.124387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.124394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.127138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.471 [2024-10-17 19:35:05.136518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.136946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.136964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.136971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.137144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.137318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.137328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.137334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.140081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.471 [2024-10-17 19:35:05.149465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.149820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.149837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.471 [2024-10-17 19:35:05.149846] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.471 [2024-10-17 19:35:05.150017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.471 [2024-10-17 19:35:05.150191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.471 [2024-10-17 19:35:05.150202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.471 [2024-10-17 19:35:05.150209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.471 [2024-10-17 19:35:05.152951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.471 [2024-10-17 19:35:05.162501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.471 [2024-10-17 19:35:05.162886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.471 [2024-10-17 19:35:05.162903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.162912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.163085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.163257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.163268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.163275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.166027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.472 [2024-10-17 19:35:05.175591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.175939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.175956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.175965] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.176138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.176312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.176322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.176329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.179073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.472 [2024-10-17 19:35:05.188628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.188921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.188942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.188949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.189122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.189295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.189305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.189311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.192058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.472 [2024-10-17 19:35:05.201631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.201926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.201943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.201951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.202123] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.202297] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.202307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.202313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.205063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.472 [2024-10-17 19:35:05.214682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.214972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.214989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.214997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.215169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.215343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.215354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.215360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.218106] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.472 [2024-10-17 19:35:05.227656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.227991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.228008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.228016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.228189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.228366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.228376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.228384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.231133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.472 [2024-10-17 19:35:05.240692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.241057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.241075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.241082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.472 [2024-10-17 19:35:05.241255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.472 [2024-10-17 19:35:05.241429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.472 [2024-10-17 19:35:05.241439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.472 [2024-10-17 19:35:05.241446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.472 [2024-10-17 19:35:05.244196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.472 [2024-10-17 19:35:05.253747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.472 [2024-10-17 19:35:05.254105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.472 [2024-10-17 19:35:05.254124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.472 [2024-10-17 19:35:05.254132] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.733 [2024-10-17 19:35:05.254305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.733 [2024-10-17 19:35:05.254479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.733 [2024-10-17 19:35:05.254491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.733 [2024-10-17 19:35:05.254498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.733 [2024-10-17 19:35:05.257242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.733 [2024-10-17 19:35:05.266826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.733 [2024-10-17 19:35:05.267165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.733 [2024-10-17 19:35:05.267181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.733 [2024-10-17 19:35:05.267191] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.733 [2024-10-17 19:35:05.267365] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.733 [2024-10-17 19:35:05.267540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.733 [2024-10-17 19:35:05.267550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.733 [2024-10-17 19:35:05.267558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.733 [2024-10-17 19:35:05.270313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.733 [2024-10-17 19:35:05.279886] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.733 [2024-10-17 19:35:05.280274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.733 [2024-10-17 19:35:05.280292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.733 [2024-10-17 19:35:05.280300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.733 [2024-10-17 19:35:05.280471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.733 [2024-10-17 19:35:05.280650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.733 [2024-10-17 19:35:05.280661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.733 [2024-10-17 19:35:05.280668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.733 [2024-10-17 19:35:05.283412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.733 [2024-10-17 19:35:05.292968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.733 [2024-10-17 19:35:05.293299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.733 [2024-10-17 19:35:05.293316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.733 [2024-10-17 19:35:05.293324] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.733 [2024-10-17 19:35:05.293496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.733 [2024-10-17 19:35:05.293674] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.733 [2024-10-17 19:35:05.293684] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.733 [2024-10-17 19:35:05.293691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.296433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.734 [2024-10-17 19:35:05.305998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.306332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.306350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.306358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.306530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.306709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.306719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.306726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.309465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.734 [2024-10-17 19:35:05.319021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.319314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.319331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.319342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.319514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.319692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.319702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.319709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.322444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.734 [2024-10-17 19:35:05.332002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.332339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.332356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.332365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.332538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.332717] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.332727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.332734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.335479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.734 [2024-10-17 19:35:05.345027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.345387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.345405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.345413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.345585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.345765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.345776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.345782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.348524] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.734 [2024-10-17 19:35:05.358082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.358442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.358460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.358468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.358645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.358819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.358832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.358839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.361581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.734 [2024-10-17 19:35:05.371153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.371435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.371456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.371465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.371641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.371816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.371826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.371833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.374563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.734 [2024-10-17 19:35:05.384120] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.384457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.384475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.384485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.384664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.384840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.384850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.384857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.387599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.734 [2024-10-17 19:35:05.397147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.397489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.397507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.397515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.397692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.397866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.397876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.397882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.400629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.734 [2024-10-17 19:35:05.410178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.410633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.410651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.410659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.410831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.411004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.411014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.411021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.413772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.734 [2024-10-17 19:35:05.423170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.423624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.423643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.734 [2024-10-17 19:35:05.423651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.734 [2024-10-17 19:35:05.423823] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.734 [2024-10-17 19:35:05.423998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.734 [2024-10-17 19:35:05.424007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.734 [2024-10-17 19:35:05.424014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.734 [2024-10-17 19:35:05.426764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.734 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:41.734 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:41.734 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:41.734 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:41.734 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.734 [2024-10-17 19:35:05.436154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.734 [2024-10-17 19:35:05.436578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.734 [2024-10-17 19:35:05.436595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.436607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.436780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.436956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.436967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.436973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.735 [2024-10-17 19:35:05.439720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.735 [2024-10-17 19:35:05.449121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.735 [2024-10-17 19:35:05.449462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.735 [2024-10-17 19:35:05.449480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.449488] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.449665] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.449840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.449850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.449857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.735 [2024-10-17 19:35:05.452605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
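[note: the (( i == 0 )) / return 0 trace above is the exit of the harness's wait-for-target loop just before timing_exit start_nvmf_tgt. A minimal sketch of that pattern, assuming scripts/rpc.py is reachable from the working directory; the helper name and retry count are illustrative, not SPDK's actual autotest_common.sh code:]

    # poll until the target's JSON-RPC server answers, or give up
    wait_for_rpc_sketch() {
        local pid=$1 i
        for ((i = 40; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1            # target process died
            scripts/rpc.py -t 1 rpc_get_methods &>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1                              # timed out: i reached 0
        return 0                                              # RPC server is up
    }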
00:27:41.735 [2024-10-17 19:35:05.462164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.735 [2024-10-17 19:35:05.462564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.735 [2024-10-17 19:35:05.462584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.462593] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.462769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.462943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.462953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.462959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.735 [2024-10-17 19:35:05.465713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.735 [2024-10-17 19:35:05.473157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.735 [2024-10-17 19:35:05.475112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.735 [2024-10-17 19:35:05.475477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.735 [2024-10-17 19:35:05.475495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.475503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.475680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.475854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.475864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.475871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.735 [2024-10-17 19:35:05.478622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.735 [2024-10-17 19:35:05.488181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.735 [2024-10-17 19:35:05.488614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.735 [2024-10-17 19:35:05.488632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.488641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.488812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.488986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.488996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.489002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.735 [2024-10-17 19:35:05.491741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.735 [2024-10-17 19:35:05.501167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.735 [2024-10-17 19:35:05.501570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.735 [2024-10-17 19:35:05.501587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.501595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.501775] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.501949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.501959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.501966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.735 [2024-10-17 19:35:05.504714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
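[note: rpc_cmd wraps SPDK's scripts/rpc.py. The target bring-up traced in this and the surrounding chunks (host/bdevperf.sh lines 17-21) amounts to the sequence below, assuming the nvmf_tgt app is already running on its default RPC socket; -o is reproduced as logged (in the rpc.py versions we are aware of it toggles the TCP C2H success optimization):]

    # 1. create the TCP transport with an 8 KiB IO unit size
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 2. back the target with a 64 MiB RAM disk (512-byte blocks) named Malloc0
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    # 3. create the subsystem, allowing any host NQN to connect
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # 4. expose Malloc0 as a namespace of that subsystem
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # 5. listen on the address/port the host side has been retrying against
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420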
00:27:41.735 Malloc0 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.735 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.735 [2024-10-17 19:35:05.514155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.735 [2024-10-17 19:35:05.514491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.735 [2024-10-17 19:35:05.514509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.735 [2024-10-17 19:35:05.514518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.735 [2024-10-17 19:35:05.514695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.735 [2024-10-17 19:35:05.514870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.735 [2024-10-17 19:35:05.514884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.735 [2024-10-17 19:35:05.514891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:41.994 [2024-10-17 19:35:05.517824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.994 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.994 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:41.994 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.994 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.994 [2024-10-17 19:35:05.527229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.994 [2024-10-17 19:35:05.527565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:41.994 [2024-10-17 19:35:05.527584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d9600 with addr=10.0.0.2, port=4420 00:27:41.994 [2024-10-17 19:35:05.527592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d9600 is same with the state(6) to be set 00:27:41.995 [2024-10-17 19:35:05.527769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d9600 (9): Bad file descriptor 00:27:41.995 [2024-10-17 19:35:05.527943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:41.995 [2024-10-17 19:35:05.527953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:41.995 [2024-10-17 19:35:05.527960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:41.995 [2024-10-17 19:35:05.530705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:41.995 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.995 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.995 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.995 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:41.995 [2024-10-17 19:35:05.535010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.995 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.995 19:35:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2250615 00:27:41.995 [2024-10-17 19:35:05.540257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:41.995 [2024-10-17 19:35:05.577701] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:43.638 4854.71 IOPS, 18.96 MiB/s [2024-10-17T17:35:07.989Z] 5676.25 IOPS, 22.17 MiB/s [2024-10-17T17:35:09.365Z] 6304.67 IOPS, 24.63 MiB/s [2024-10-17T17:35:10.302Z] 6827.60 IOPS, 26.67 MiB/s [2024-10-17T17:35:11.238Z] 7235.09 IOPS, 28.26 MiB/s [2024-10-17T17:35:12.177Z] 7589.75 IOPS, 29.65 MiB/s [2024-10-17T17:35:13.113Z] 7886.54 IOPS, 30.81 MiB/s [2024-10-17T17:35:14.050Z] 8145.93 IOPS, 31.82 MiB/s [2024-10-17T17:35:14.050Z] 8357.53 IOPS, 32.65 MiB/s 00:27:50.266 Latency(us) 00:27:50.266 [2024-10-17T17:35:14.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:50.266 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:50.266 Verification LBA range: start 0x0 length 0x4000 00:27:50.266 Nvme1n1 : 15.01 8360.00 32.66 13311.44 0.00 5887.46 436.91 14542.75 00:27:50.266 [2024-10-17T17:35:14.050Z] =================================================================================================================== 00:27:50.266 [2024-10-17T17:35:14.050Z] Total : 8360.00 32.66 13311.44 0.00 5887.46 436.91 14542.75 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:50.526 19:35:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:50.526 rmmod nvme_tcp 00:27:50.526 rmmod nvme_fabrics 00:27:50.526 rmmod nvme_keyring 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 2251547 ']' 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 2251547 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2251547 ']' 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2251547 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2251547 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2251547' 00:27:50.526 killing process with pid 2251547 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2251547 00:27:50.526 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2251547 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-save 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # iptables-restore 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:50.785 19:35:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:53.323 00:27:53.323 real 0m26.158s 00:27:53.323 user 1m1.298s 00:27:53.323 sys 0m6.764s 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:53.323 
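[note: the IOPS ramp and latency table above come from SPDK's bdevperf example app driving the Nvme1n1 bdev. A hedged reconstruction of an equivalent standalone run - only the workload parameters (queue depth 128, 4 KiB verify IOs, 15 s) are certain from the log; the binary path and JSON config name are assumptions:]

    # replay the logged workload; bdev.json would attach Nvme1 over NVMe/TCP (not shown)
    ./build/examples/bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 15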
************************************ 00:27:53.323 END TEST nvmf_bdevperf 00:27:53.323 ************************************ 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.323 ************************************ 00:27:53.323 START TEST nvmf_target_disconnect 00:27:53.323 ************************************ 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:53.323 * Looking for test storage... 00:27:53.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.323 --rc genhtml_branch_coverage=1 00:27:53.323 --rc genhtml_function_coverage=1 00:27:53.323 --rc genhtml_legend=1 00:27:53.323 --rc geninfo_all_blocks=1 00:27:53.323 --rc geninfo_unexecuted_blocks=1 00:27:53.323 00:27:53.323 ' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.323 --rc genhtml_branch_coverage=1 00:27:53.323 --rc genhtml_function_coverage=1 00:27:53.323 --rc genhtml_legend=1 00:27:53.323 --rc geninfo_all_blocks=1 00:27:53.323 --rc geninfo_unexecuted_blocks=1 00:27:53.323 00:27:53.323 ' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.323 --rc genhtml_branch_coverage=1 00:27:53.323 --rc genhtml_function_coverage=1 00:27:53.323 --rc genhtml_legend=1 00:27:53.323 --rc geninfo_all_blocks=1 00:27:53.323 --rc geninfo_unexecuted_blocks=1 00:27:53.323 00:27:53.323 ' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.323 --rc genhtml_branch_coverage=1 00:27:53.323 --rc genhtml_function_coverage=1 00:27:53.323 --rc genhtml_legend=1 00:27:53.323 --rc geninfo_all_blocks=1 00:27:53.323 --rc geninfo_unexecuted_blocks=1 00:27:53.323 00:27:53.323 ' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.323 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:53.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.324 19:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.897 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:59.898 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:59.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:59.898 Found net devices under 0000:86:00.0: cvl_0_0 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:59.898 Found net devices under 0000:86:00.1: cvl_0_1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
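The device discovery that completed above (Found net devices under 0000:86:00.0/1: cvl_0_0 / cvl_0_1) reduces to a sysfs walk: gather_supported_nvmf_pci_devs matches known Intel/Mellanox PCI IDs, then globs each matching device's net/ directory to find its kernel interface, before nvmf_tcp_init assigns the 10.0.0.x addressing below. A minimal standalone sketch of that mapping, assuming the e810 ID 0x8086:0x159b seen in the log; the script layout is illustrative, not the test's actual code:

    #!/usr/bin/env bash
    # Sketch: find net devices backed by a supported NIC (Intel e810, 0x8086:0x159b),
    # the same sysfs relationship nvmf/common.sh exploits above.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        # Each matching device exposes its netdev name(s) under $pci/net/
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done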
00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:27:59.898 00:27:59.898 --- 10.0.0.2 ping statistics --- 00:27:59.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.898 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:27:59.898 00:27:59.898 --- 10.0.0.1 ping statistics --- 00:27:59.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.898 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:59.898 ************************************ 00:27:59.898 START TEST nvmf_target_disconnect_tc1 00:27:59.898 ************************************ 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.898 19:35:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:59.898 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:59.898 [2024-10-17 19:35:22.883766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.899 [2024-10-17 19:35:22.883885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa22b80 with addr=10.0.0.2, port=4420 00:27:59.899 [2024-10-17 19:35:22.883949] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:59.899 [2024-10-17 19:35:22.883984] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:59.899 [2024-10-17 19:35:22.884006] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:59.899 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:59.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:59.899 Initializing NVMe Controllers 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:59.899 00:27:59.899 real 0m0.122s 00:27:59.899 user 0m0.044s 00:27:59.899 sys 0m0.076s 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 ************************************ 00:27:59.899 END TEST nvmf_target_disconnect_tc1 00:27:59.899 ************************************ 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 ************************************ 00:27:59.899 START TEST nvmf_target_disconnect_tc2 00:27:59.899 ************************************ 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2256716 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2256716 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2256716 ']' 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.899 19:35:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 [2024-10-17 19:35:23.026834] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:27:59.899 [2024-10-17 19:35:23.026878] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.899 [2024-10-17 19:35:23.104998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.899 [2024-10-17 19:35:23.145927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.899 [2024-10-17 19:35:23.145969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
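With nvmf_tgt starting inside the cvl_0_0_ns_spdk namespace on cores 4-7 (-m 0xF0), the rpc_cmd calls that follow in the log build the data path: a 64 MB malloc bdev, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with its namespace, and a listener on 10.0.0.2:4420. A condensed sketch of the same sequence via scripts/rpc.py, assuming the default RPC socket and an SPDK build tree as the working directory:

    # Start the target in the test namespace, then configure it over RPC;
    # rpc.py speaks to the same JSON-RPC server the test's rpc_cmd wrapper uses.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    sleep 1   # stand-in for the script's waitforlisten on the target pid
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420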
00:27:59.899 [2024-10-17 19:35:23.145977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.899 [2024-10-17 19:35:23.145982] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.899 [2024-10-17 19:35:23.145987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.899 [2024-10-17 19:35:23.147563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:59.899 [2024-10-17 19:35:23.147662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:59.899 [2024-10-17 19:35:23.147745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:59.899 [2024-10-17 19:35:23.147746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 Malloc0 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 [2024-10-17 19:35:23.315576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 19:35:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 [2024-10-17 19:35:23.344637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2256742 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:59.899 19:35:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:01.823 19:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2256716 00:28:01.823 19:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error 
(sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 [2024-10-17 19:35:25.373016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, 
sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 [2024-10-17 19:35:25.373212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Write completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.823 starting I/O failed 00:28:01.823 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 
00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 [2024-10-17 19:35:25.373406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 
starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Write completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 Read completed with error (sct=0, sc=8) 00:28:01.824 starting I/O failed 00:28:01.824 [2024-10-17 19:35:25.373605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:01.824 [2024-10-17 19:35:25.373809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.373831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.374008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.374060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.374269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.374305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.374545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.374578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.374738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.374771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.374951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.374984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.375134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.375166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.375294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.375327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 
00:28:01.824 [2024-10-17 19:35:25.375461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.375473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.375576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.375587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.375691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.375704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.375889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.375922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.376055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.376088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.376265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.824 [2024-10-17 19:35:25.376299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.824 qpair failed and we were unable to recover it. 00:28:01.824 [2024-10-17 19:35:25.376439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.376473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.376616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.376653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.376772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.376805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.376999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-10-17 19:35:25.377150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.377220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.377291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.377531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.377669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.377848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.377880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.377997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.378158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.378260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.378415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-10-17 19:35:25.378563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.378743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.378890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.378923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.379111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.379146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.379268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.379300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.379437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.379471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.379670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.379705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.379834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.379866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.379982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.380151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-10-17 19:35:25.380298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.380491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.380570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.380726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.380820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.380831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.381585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.381618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.381717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.381743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.381894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.381907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.381973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.381984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 00:28:01.825 [2024-10-17 19:35:25.382048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.825 [2024-10-17 19:35:25.382059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.825 qpair failed and we were unable to recover it. 
00:28:01.825 [2024-10-17 19:35:25.382124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.382217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.382299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.382372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.382580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.382737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.382901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.382933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.383058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.383099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.383221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.383254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.383449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.383481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 
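All of the posix_sock_create failures in this stretch are one condition: errno 111 is ECONNREFUSED on Linux, meaning 10.0.0.2 answered the SYN with a RST because nothing was accepting on port 4420, which is what these nvmf tests typically arrange by taking the target's listener down. The errno is easy to reproduce with a plain socket; the sketch below is a hypothetical stand-alone repro that points at 127.0.0.1 and assumes no local listener on 4420:

    /* refused.c: reproduce the errno = 111 seen in posix_sock_create. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { 0 };

        if (fd < 0)
            return 1;
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);               /* NVMe/TCP port */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* No listener -> the kernel completes the connect with
             * ECONNREFUSED (111), the value the SPDK sock layer logs. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

With nothing bound to 4420 this prints "connect() failed, errno = 111 (Connection refused)", the same shape as the log lines above.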
00:28:01.826 [2024-10-17 19:35:25.383597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.383639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.383757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.383789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.383894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.383927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.384936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.384958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 
00:28:01.826 [2024-10-17 19:35:25.385053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.385886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.385968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.386116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.386257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 
00:28:01.826 [2024-10-17 19:35:25.386468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.386626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.386772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.386930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.826 [2024-10-17 19:35:25.386963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.826 qpair failed and we were unable to recover it. 00:28:01.826 [2024-10-17 19:35:25.387076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.387115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.387220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.387253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.387377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.387401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.387557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.387580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.387674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.387699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.387853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.387876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 
00:28:01.827 [2024-10-17 19:35:25.387969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.388010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.388205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.388238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.388415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.388447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.388644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.388679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.388852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.388885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.389021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.389219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.389349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.389480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.389726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 
00:28:01.827 [2024-10-17 19:35:25.389844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.389969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.389992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.390102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.390124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.390222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.390245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.390419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.390453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.390560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.390593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.390734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.390767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.390891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.390924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.391047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.391070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.391171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.391194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 
00:28:01.827 [2024-10-17 19:35:25.391298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.391321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.391532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.391618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.391815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.391851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.391967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.392000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.392104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.392137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.392314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.392345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.392521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.392559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.392724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.392758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.392869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.392902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.827 qpair failed and we were unable to recover it. 00:28:01.827 [2024-10-17 19:35:25.393017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.827 [2024-10-17 19:35:25.393050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 
00:28:01.828 [2024-10-17 19:35:25.393160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.393191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.393310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.393333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.393496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.393528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.393652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.393685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.393872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.393904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.394086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.394119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.394380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.394403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.394578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.394607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.394715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.394737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.394891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.394914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 
00:28:01.828 [2024-10-17 19:35:25.395015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.395185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.395428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.395549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.395682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.395802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.395910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.395933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.396098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.396121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.396226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.396253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.396421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.396444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 
00:28:01.828 [2024-10-17 19:35:25.396614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.396638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.396816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.396840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.397020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.397044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.397261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.397283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.397368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.397390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.397545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.397568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.397749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.397773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.828 [2024-10-17 19:35:25.398012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.828 [2024-10-17 19:35:25.398034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.828 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.398184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.398207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.398362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.398383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-10-17 19:35:25.398487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.398509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.398607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.398631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.398788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.398811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.398984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.399168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.399350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.399456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.399578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.399713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 00:28:01.829 [2024-10-17 19:35:25.399843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.829 [2024-10-17 19:35:25.399866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.829 qpair failed and we were unable to recover it. 
00:28:01.829 [2024-10-17 19:35:25.399957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.829 [2024-10-17 19:35:25.399978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:01.829 qpair failed and we were unable to recover it.
00:28:01.829 [2024-10-17 19:35:25.402865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.829 [2024-10-17 19:35:25.402948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:01.829 qpair failed and we were unable to recover it.
00:28:01.829 [2024-10-17 19:35:25.403201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.829 [2024-10-17 19:35:25.403226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.829 qpair failed and we were unable to recover it.
00:28:01.835 [... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=... with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats through 2024-10-17 19:35:25.436089, cycling among tqpair=0xb48ca0, 0x7f8508000b90 and 0x7f8500000b90 ...]
00:28:01.835 [2024-10-17 19:35:25.436199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.436222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.436380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.436402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.436633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.436656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.436901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.436924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.437100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.437123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.437308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.437331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.437427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.437451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.437666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.437690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.437840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.437863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.438033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.438055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 
00:28:01.835 [2024-10-17 19:35:25.438211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.438235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.438390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.438414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.438588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.438616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.438861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.438884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.439052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.439075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.439298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.439321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.439491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.439514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.439683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.439707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.439880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.439903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.440000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.440022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 
00:28:01.835 [2024-10-17 19:35:25.440191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.440213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.440311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.440334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.440433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.440455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.440680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.440703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.440877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.440899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.441101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.441123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.441217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.441238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.441401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.441423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.441593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.441622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.441731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.441753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 
00:28:01.835 [2024-10-17 19:35:25.441973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.441996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.442237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.442260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.442359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.442381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.442621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.442645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.442818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.442842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.442932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.835 [2024-10-17 19:35:25.442954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.835 qpair failed and we were unable to recover it. 00:28:01.835 [2024-10-17 19:35:25.443129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.443152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.443317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.443348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.443452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.443474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.443637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.443660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 
00:28:01.836 [2024-10-17 19:35:25.443759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.443781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.443944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.443966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.444120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.444142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.444234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.444256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.444374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.444396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.444492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.444514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.444669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.444693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.444901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.444923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.445016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.445038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.445306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.445328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 
00:28:01.836 [2024-10-17 19:35:25.445427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.445451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.445623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.445646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.445730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.445752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.445923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.445945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.446109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.446131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.446303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.446326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.446438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.446461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.446557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.446580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.446828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.446852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.447017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.447040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 
00:28:01.836 [2024-10-17 19:35:25.447265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.447288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.447455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.447477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.447705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.447728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.447897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.447920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.448025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.448048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.448224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.448247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.448355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.448378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.448599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.448629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.448861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.448885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.448985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.449007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 
00:28:01.836 [2024-10-17 19:35:25.449193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.449216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.449372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.449396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.449586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.449616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.449840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.449863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.450105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.450128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.450294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.450317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.836 qpair failed and we were unable to recover it. 00:28:01.836 [2024-10-17 19:35:25.450538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.836 [2024-10-17 19:35:25.450561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.450668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.450695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.450844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.450867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.451024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.451047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 
00:28:01.837 [2024-10-17 19:35:25.451140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.451163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.451340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.451363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.451530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.451553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.451806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.451830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.452043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.452065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.452217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.452240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.452430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.452452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.452692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.452716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.452939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.452962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.453063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.453086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 
00:28:01.837 [2024-10-17 19:35:25.453345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.453368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.453522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.453546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.453768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.453792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.453894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.453916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.454021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.454044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.454203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.454226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.454387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.454410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.454574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.454596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.454757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.454780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.454875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.454898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 
00:28:01.837 [2024-10-17 19:35:25.455051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.455074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.455164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.455187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.455404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.455427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.455598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.455628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.455855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.455878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.456094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.456117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.456225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.456248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.456425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.456447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.456545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.456568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.837 [2024-10-17 19:35:25.456748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.456772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 
00:28:01.837 [2024-10-17 19:35:25.456937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.837 [2024-10-17 19:35:25.456960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.837 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.457200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.457224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.457378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.457401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.457552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.457575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.457800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.457824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.457973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.457995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.458166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.458188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.458441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.458467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.458634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.458658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.458828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.458851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 
00:28:01.838 [2024-10-17 19:35:25.459002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.459025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.459268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.459292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.459403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.459426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.459531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.459554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.459771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.459795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.460030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.460052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.460234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.460258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.460356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.460378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.460550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.460573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 00:28:01.838 [2024-10-17 19:35:25.460750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.838 [2024-10-17 19:35:25.460774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.838 qpair failed and we were unable to recover it. 
00:28:01.838 [2024-10-17 19:35:25.460875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.838 [2024-10-17 19:35:25.460897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.838 qpair failed and we were unable to recover it.
00:28:01.840 [2024-10-17 19:35:25.474978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.840 [2024-10-17 19:35:25.475050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:01.840 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error -> qpair failed and we were unable to recover it.) repeats roughly 200 times between 19:35:25.460 and 19:35:25.497, almost always for tqpair=0x7f8500000b90, with a few occurrences for tqpair=0x7f8508000b90 around 19:35:25.475, 19:35:25.491 and 19:35:25.494, all against addr=10.0.0.2, port=4420 ...]
00:28:01.843 [2024-10-17 19:35:25.497815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.843 [2024-10-17 19:35:25.497837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.843 qpair failed and we were unable to recover it.
00:28:01.843 [2024-10-17 19:35:25.497950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.497973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.843 [2024-10-17 19:35:25.498085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.498107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.843 [2024-10-17 19:35:25.498271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.498292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.843 [2024-10-17 19:35:25.498444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.498467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.843 [2024-10-17 19:35:25.498688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.498740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.843 [2024-10-17 19:35:25.498911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.498936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.843 [2024-10-17 19:35:25.499093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.843 [2024-10-17 19:35:25.499116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.843 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.499268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.499291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.499442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.499465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.499577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.499611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 
00:28:01.844 [2024-10-17 19:35:25.499704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.499727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.499945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.499968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.500080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.500103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.500191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.500214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.500377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.500400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.500562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.500585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.500677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.500700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.500955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.500978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.501078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.501208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 
00:28:01.844 [2024-10-17 19:35:25.501336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.501467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.501576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.501766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.501956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.501979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.502063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.502086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.502186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.502209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.502368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.502391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.502560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.502584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.502750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.502773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 
00:28:01.844 [2024-10-17 19:35:25.502864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.502886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.503092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.503284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.503458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.503585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.503722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.503844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.503996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.504018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.504137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.504160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.504337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.504361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 
00:28:01.844 [2024-10-17 19:35:25.504525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.504548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.504660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.504685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.504915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.504938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.505122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.505144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.505316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.505339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.505530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.505554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.505660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.505684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.844 [2024-10-17 19:35:25.505846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.844 [2024-10-17 19:35:25.505869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.844 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.505961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.505984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.506160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.506184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 
00:28:01.845 [2024-10-17 19:35:25.506344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.506367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.506533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.506557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.506676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.506699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.506855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.506877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.507030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.507053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.507292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.507314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.507413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.507436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.507661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.507685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.507873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.507900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.508121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.508144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 
00:28:01.845 [2024-10-17 19:35:25.508314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.508337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.508452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.508475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.508632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.508655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.508764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.508787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.508898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.508921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.509089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.509112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.509262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.509285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.509452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.509476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.509661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.509684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.509963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.509986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 
00:28:01.845 [2024-10-17 19:35:25.510104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.510127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.510226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.510249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.510471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.510495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.510650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.510674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.510899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.510922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.511088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.511110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.511259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.511282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.511387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.511410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.511562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.511585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.511759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.511783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 
00:28:01.845 [2024-10-17 19:35:25.511949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.511972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.512144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.512166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.512324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.512346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.512515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.512537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.512687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.512710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.512808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.512830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.512919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.512942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.513054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.845 [2024-10-17 19:35:25.513077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.845 qpair failed and we were unable to recover it. 00:28:01.845 [2024-10-17 19:35:25.513167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.513188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.513358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.513381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 
00:28:01.846 [2024-10-17 19:35:25.513564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.513587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.513692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.513716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.513816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.513838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.514026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.514048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.514218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.514240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.514403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.514426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.514662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.514687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.514863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.514885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.514995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.515018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.515222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.515249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 
00:28:01.846 [2024-10-17 19:35:25.515399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.515421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.515643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.515678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.515793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.515816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.516035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.516059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.516160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.516182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.516355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.516377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.516477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.516500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.516745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.516769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.517110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.517136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.517387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.517412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 
00:28:01.846 [2024-10-17 19:35:25.517582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.517609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.517771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.517794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.517990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.518013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.518174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.518197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.518421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.518444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.518628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.518651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.518806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.518829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.519074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.519097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.519319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.519342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.519496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.519519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 
00:28:01.846 [2024-10-17 19:35:25.519636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.519660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.519811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.519833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.519984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.520007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.520108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.520132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.520383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.846 [2024-10-17 19:35:25.520406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.846 qpair failed and we were unable to recover it. 00:28:01.846 [2024-10-17 19:35:25.520570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.520593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.520697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.520720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.520885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.520908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.521012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.521036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.521143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.521166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 
00:28:01.847 [2024-10-17 19:35:25.521352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.521376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.521531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.521555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.521642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.521664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.521775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.521798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.522018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.522041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.522138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.522161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.522243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.522264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.522452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.522475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.522631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.522654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 00:28:01.847 [2024-10-17 19:35:25.522851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.847 [2024-10-17 19:35:25.522874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.847 qpair failed and we were unable to recover it. 
00:28:01.851 [2024-10-17 19:35:25.553073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.553095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.553179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.553200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.553300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.553323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.553528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.553580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.553771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.553798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.554046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.554069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.554170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.554192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.554356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.554380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.554478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.554501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.554616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:01.851 [2024-10-17 19:35:25.554640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:01.851 qpair failed and we were unable to recover it.
00:28:01.851 [2024-10-17 19:35:25.554804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.851 [2024-10-17 19:35:25.554826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.851 qpair failed and we were unable to recover it. 00:28:01.851 [2024-10-17 19:35:25.554941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.851 [2024-10-17 19:35:25.554964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.851 qpair failed and we were unable to recover it. 00:28:01.851 [2024-10-17 19:35:25.555182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.851 [2024-10-17 19:35:25.555205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.851 qpair failed and we were unable to recover it. 00:28:01.851 [2024-10-17 19:35:25.555423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.555446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.555704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.555727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.555834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.555857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.556101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.556124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.556288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.556310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.556479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.556502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.556617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.556641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 
00:28:01.852 [2024-10-17 19:35:25.556742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.556768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.556941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.556963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.557071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.557094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.557201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.557223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.557375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.557398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.557486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.557509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.557598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.557637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.557890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.557912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.558014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.558037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.558209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.558231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 
00:28:01.852 [2024-10-17 19:35:25.558347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.558370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.558544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.558566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.558825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.558849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.559910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.559933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 
00:28:01.852 [2024-10-17 19:35:25.560118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.560238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.560376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.560561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.560690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.560804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.560911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.560934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.561155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.561178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.561274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.561296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.561392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.561414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 
00:28:01.852 [2024-10-17 19:35:25.561509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.561532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.561687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.561710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.561874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.561897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.852 [2024-10-17 19:35:25.562153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.852 [2024-10-17 19:35:25.562176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.852 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.562341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.562363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.562518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.562541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.562717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.562741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.562902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.562923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.563026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.563049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.563199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.563222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 
00:28:01.853 [2024-10-17 19:35:25.563387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.563413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.563529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.563551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.563639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.563662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.563839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.563862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.564033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.564056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.564228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.564250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.564428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.564451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.564699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.564723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.564902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.564924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.565021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.565044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 
00:28:01.853 [2024-10-17 19:35:25.565287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.565309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.565421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.565444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.565616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.565640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.565737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.565761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.565938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.565962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.566146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.566168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.566355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.566378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.566495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.566517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.566765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.566788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.567008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.567030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 
00:28:01.853 [2024-10-17 19:35:25.567142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.567165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.567351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.567373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.567540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.567563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.567672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.567695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.567850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.567873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.568028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.568050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.568141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.568164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.568332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.568355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.568521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.568543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.568764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.568787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 
00:28:01.853 [2024-10-17 19:35:25.568977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.568999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.569098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.569121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.569215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.569238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.853 [2024-10-17 19:35:25.569401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.853 [2024-10-17 19:35:25.569424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.853 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.569652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.569676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.569793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.569816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.569922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.569945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.570217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.570239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.570404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.570427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.570514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.570536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 
00:28:01.854 [2024-10-17 19:35:25.570754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.570783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.570898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.570920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.571161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.571183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.571354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.571376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.571550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.571573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.571759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.571785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.571890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.571912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.572077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.572100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.572261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.572284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.572504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.572526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 
00:28:01.854 [2024-10-17 19:35:25.572729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.572753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.572859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.572881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.573105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.573128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.573229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.573252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.573441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.573463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.573688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.573711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.573807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.573831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.573938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.573961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.574134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.574156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.574344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.574366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 
00:28:01.854 [2024-10-17 19:35:25.574453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.574475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.574639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.574662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.574834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.574856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.575038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.575060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.575244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.575266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.575365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.575387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.575543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.575565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.575783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.575835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.576050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.576075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 00:28:01.854 [2024-10-17 19:35:25.576189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.576212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.854 qpair failed and we were unable to recover it. 
00:28:01.854 [2024-10-17 19:35:25.576389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.854 [2024-10-17 19:35:25.576412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.576565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.576589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.576687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.576709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.576801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.576825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.577020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.577043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.577201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.577224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.577420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.577443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.577658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.577682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.577786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.577808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.577910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.577932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 
00:28:01.855 [2024-10-17 19:35:25.578086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.578108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.578363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.578386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.578490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.578512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.578682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.578706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.578869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.578891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.579049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.579071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.579176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.579199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.579297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.579319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.579424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.579447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.579628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.579651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 
00:28:01.855 [2024-10-17 19:35:25.579897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.579919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.580089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.580112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.580267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.580290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.580436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.580458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.580657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.580685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.580840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.580862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.580962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.580984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.581144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.581167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.581319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.581341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.581500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.581523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 
00:28:01.855 [2024-10-17 19:35:25.581622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.581657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.581809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.581832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.581941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.581963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 00:28:01.855 [2024-10-17 19:35:25.582966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.582989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.855 qpair failed and we were unable to recover it. 
00:28:01.855 [2024-10-17 19:35:25.583091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.855 [2024-10-17 19:35:25.583113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.583286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.583309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.583405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.583428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.583618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.583642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.583809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.583831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.583939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.583962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.584065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.584088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.584317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.584341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.584505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.584527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.584615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.584637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 
00:28:01.856 [2024-10-17 19:35:25.584727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.584749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.584847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.584874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.585061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.585083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.585174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.585196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.585394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.585417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.585527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.585550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.585717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.585741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.585919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.585941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.586048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.586071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.586219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.586242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 
00:28:01.856 [2024-10-17 19:35:25.586411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.586433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.586586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.586615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.586846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.586869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.587019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.587041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.587148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.587171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.587399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.587421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.587534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.587557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.587716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.587740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.587998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.588021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.588242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.588264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 
00:28:01.856 [2024-10-17 19:35:25.588435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.588457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.588559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.588581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.588749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.588772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.588927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:01.856 [2024-10-17 19:35:25.588950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:01.856 qpair failed and we were unable to recover it. 00:28:01.856 [2024-10-17 19:35:25.589118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.589141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 00:28:02.153 [2024-10-17 19:35:25.589416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.589443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 00:28:02.153 [2024-10-17 19:35:25.589620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.589644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 00:28:02.153 [2024-10-17 19:35:25.589751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.589773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 00:28:02.153 [2024-10-17 19:35:25.589873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.589896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 00:28:02.153 [2024-10-17 19:35:25.590132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.590155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 
00:28:02.153 [2024-10-17 19:35:25.590309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.590331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.153 qpair failed and we were unable to recover it. 00:28:02.153 [2024-10-17 19:35:25.590516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.153 [2024-10-17 19:35:25.590538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.590638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.590662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.590747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.590769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.590878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.590900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.591008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.591031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.591184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.591207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.591370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.591392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.591631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.591654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.591750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.591772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 
00:28:02.154 [2024-10-17 19:35:25.591865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.591888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.592079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.592213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.592324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.592464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.592588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.592802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.592990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.593012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.593174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.593196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.593441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.593464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 
00:28:02.154 [2024-10-17 19:35:25.593557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.593579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.593695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.593718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.593888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.593910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.594017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.594040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.594155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.594177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.594346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.594373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.594524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.594546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.594654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.594678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.594929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.594951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.595033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.595054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 
00:28:02.154 [2024-10-17 19:35:25.595156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.595179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.595348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.595371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.595524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.595546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.595703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.595726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.595895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.595917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.596100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.596123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.596316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.596338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.596428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.596450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.596618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.596642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.596796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.596820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 
00:28:02.154 [2024-10-17 19:35:25.596992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.154 [2024-10-17 19:35:25.597015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.154 qpair failed and we were unable to recover it. 00:28:02.154 [2024-10-17 19:35:25.597231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.597254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.597419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.597442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.597544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.597565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.597742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.597765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.597935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.597957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.598120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.598142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.598238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.598261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.598354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.598376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.598548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.598571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 
00:28:02.155 [2024-10-17 19:35:25.598748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.598771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.598865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.598887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.599060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.599082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.599272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.599295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.599528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.599551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.599722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.599745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.599848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.599871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.599971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.599993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.600167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.600189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.600312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.600334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 
00:28:02.155 [2024-10-17 19:35:25.600490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.600513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.600706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.600730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.600835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.600856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.601006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.601028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.601185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.601207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.601312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.601338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.601428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.601450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.601610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.601633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.601895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.601917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.602001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.602022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 
00:28:02.155 [2024-10-17 19:35:25.602118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.602140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.602294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.602317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.602480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.602502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.602612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.602636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.602855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.602878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.603029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.603051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.603147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.603170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.603392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.603415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.603525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.603548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.603754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.603778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 
00:28:02.155 [2024-10-17 19:35:25.603937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.603960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.155 qpair failed and we were unable to recover it. 00:28:02.155 [2024-10-17 19:35:25.604083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.155 [2024-10-17 19:35:25.604106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.604220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.604242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.604342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.604364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.604542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.604564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.604826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.604849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.604956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.604979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.605067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.605088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.605254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.605277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.605554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.605577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 
00:28:02.156 [2024-10-17 19:35:25.605674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.605697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.605963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.605985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.606208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.606231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.606344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.606368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.606543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.606564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.606691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.606714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.606807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.606829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.606983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.607005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.607179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.607202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.607366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.607389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 
00:28:02.156 [2024-10-17 19:35:25.607490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.607512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.607616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.607640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.607817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.607839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.608066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.608089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.608257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.608279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.608441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.608468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.608570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.608592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.608836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.608859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.608958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.608980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.609198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.609221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 
00:28:02.156 [2024-10-17 19:35:25.609377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.609400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.609670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.609693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.609863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.609886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.610032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.610056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.610162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.610184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.610336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.610358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.610524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.610546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.610695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.610719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.610956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.610979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.611145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.611168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 
00:28:02.156 [2024-10-17 19:35:25.611331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.156 [2024-10-17 19:35:25.611353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.156 qpair failed and we were unable to recover it. 00:28:02.156 [2024-10-17 19:35:25.611518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.611542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.611640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.611664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.611782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.611804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.611893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.611916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.612001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.612212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.612323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.612428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.612611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 
00:28:02.157 [2024-10-17 19:35:25.612728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.612912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.612934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.613146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.613198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.613330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.613356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.613579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.613615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.613770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.613793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.613949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.613971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.614074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.614097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.614211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.614233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 00:28:02.157 [2024-10-17 19:35:25.614408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.157 [2024-10-17 19:35:25.614430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.157 qpair failed and we were unable to recover it. 
00:28:02.157 [... the same connect() failed (errno = 111) / qpair-failure pair for tqpair=0xb48ca0 repeats 67 more times, 19:35:25.613330 through 19:35:25.624780 ...]
00:28:02.159 [2024-10-17 19:35:25.624920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.159 [2024-10-17 19:35:25.624971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.159 qpair failed and we were unable to recover it.
00:28:02.160 [... the same connect() failed (errno = 111) / qpair-failure pair for tqpair=0x7f8500000b90 repeats 119 more times, 19:35:25.625167 through 19:35:25.644936 ...]
00:28:02.162 [2024-10-17 19:35:25.645175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.645228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.645396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.645422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.645593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.645634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.645743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.645766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.645928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.645952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.646047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.646070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.646232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.646256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.646414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.646437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.646537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.646560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.646669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.646693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 
00:28:02.162 [2024-10-17 19:35:25.646797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.646821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.648211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.648252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.648461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.648487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.648648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.648672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.648858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.648881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.648977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.649092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.649196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.649314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.649553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 
00:28:02.162 [2024-10-17 19:35:25.649759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.649960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.649984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.650161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.650193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.650371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.650405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.650599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.650644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.650840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.650863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.650945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.650968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.651074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.651101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.651199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.651221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.651304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.651327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 
00:28:02.162 [2024-10-17 19:35:25.651418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.162 [2024-10-17 19:35:25.651439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.162 qpair failed and we were unable to recover it. 00:28:02.162 [2024-10-17 19:35:25.651563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.651587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.651690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.651713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.651799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.651826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.651985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.652089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.652192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.652361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.652566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.652754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 
00:28:02.163 [2024-10-17 19:35:25.652872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.652895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.653060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.653083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.653176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.653199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.653353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.653376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.653487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.653519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.653637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.653670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.653858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.653890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.654018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.654149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.654269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 
00:28:02.163 [2024-10-17 19:35:25.654382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.654607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.654791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.654910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.654931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.655081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.655104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.655276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.655299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.655406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.655428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.655520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.655543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.655720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.655745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.655899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.655923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 
00:28:02.163 [2024-10-17 19:35:25.656075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.656098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.656188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.656212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.656311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.656334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.656452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.656475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.656647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.656670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.656915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.656939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.657121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.657143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.657240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.657261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.657436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.657464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.657621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.657645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 
00:28:02.163 [2024-10-17 19:35:25.657746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.657769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.657867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.163 [2024-10-17 19:35:25.657891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.163 qpair failed and we were unable to recover it. 00:28:02.163 [2024-10-17 19:35:25.658005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.658185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.658299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.658409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.658538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.658717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.658894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.658917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 
00:28:02.164 [2024-10-17 19:35:25.659131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.659878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.659899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.660060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.660240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.660354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 
00:28:02.164 [2024-10-17 19:35:25.660550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.660677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.660851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.660962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.660985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.661137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.661274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.661402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.661523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.661637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.661814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 
00:28:02.164 [2024-10-17 19:35:25.661930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.661953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.662916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.662939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.663036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.663059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.663214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.663237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 
00:28:02.164 [2024-10-17 19:35:25.663465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.663489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.663569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.663592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.164 [2024-10-17 19:35:25.663710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.164 [2024-10-17 19:35:25.663734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.164 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.663819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.663841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.664053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.664122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.664251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.664288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.664416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.664450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.664561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.664594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.664733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.664766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.664877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.664911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 
00:28:02.165 [2024-10-17 19:35:25.665094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.665126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.665234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.665267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.665384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.665434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.665685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.665713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.665835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.665859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 
00:28:02.165 [2024-10-17 19:35:25.666670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.666965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.666989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.667096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.667281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.667385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.667569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.667696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.667842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 00:28:02.165 [2024-10-17 19:35:25.667974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.668008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it. 
00:28:02.165 [2024-10-17 19:35:25.668134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.165 [2024-10-17 19:35:25.668166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.165 qpair failed and we were unable to recover it.
00:28:02.168 [... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeated for tqpair=0xb48ca0 from 19:35:25.668300 through 19:35:25.689663 ...]
00:28:02.168 [2024-10-17 19:35:25.689753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.168 [2024-10-17 19:35:25.689777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.168 qpair failed and we were unable to recover it. 00:28:02.169 [2024-10-17 19:35:25.690058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.169 [2024-10-17 19:35:25.690127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.169 qpair failed and we were unable to recover it.
00:28:02.169 [... one more identical entry for tqpair=0xb48ca0, then the same sequence repeated for tqpair=0x7f84fc000b90 from 19:35:25.690334 through 19:35:25.691550 ...]
00:28:02.169 [2024-10-17 19:35:25.691653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.169 [2024-10-17 19:35:25.691680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.169 qpair failed and we were unable to recover it.
00:28:02.170 [... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeated for tqpair=0xb48ca0 from 19:35:25.691787 through 19:35:25.702564 ...]
00:28:02.170 [2024-10-17 19:35:25.702840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.170 [2024-10-17 19:35:25.702864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.170 qpair failed and we were unable to recover it. 00:28:02.170 [2024-10-17 19:35:25.703021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.170 [2024-10-17 19:35:25.703043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.170 qpair failed and we were unable to recover it. 00:28:02.170 [2024-10-17 19:35:25.703206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.170 [2024-10-17 19:35:25.703229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.170 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.703396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.703418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.703579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.703608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.703712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.703736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.703827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.703849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.704028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.704051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.704134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.704156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.704402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.704425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-10-17 19:35:25.704586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.704615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.704784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.704807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.704901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.704924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.705026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.705049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.705198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.705221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.705393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.705417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.705615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.705639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.705809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.705836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.705940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.705962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.706058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.706081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-10-17 19:35:25.706249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.706272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.706431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.706454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.706551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.706573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.706764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.706790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.706892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.706915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.707026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.707049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.707138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.707160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.707309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.707332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.707435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.707457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.707688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.707713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 
00:28:02.171 [2024-10-17 19:35:25.707871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.707893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.708005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.708027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.708194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.708217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.708439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.708461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.708556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.708578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.708771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.708795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.708881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.708904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.709058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.709081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.171 qpair failed and we were unable to recover it. 00:28:02.171 [2024-10-17 19:35:25.709259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.171 [2024-10-17 19:35:25.709281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.709462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.709486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 
00:28:02.172 [2024-10-17 19:35:25.709578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.709605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.709703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.709727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.709830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.709854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.709965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.709987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.710143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.710172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.710335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.710358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.710465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.710488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.710678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.710703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.710806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.710829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.710983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.711006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 
00:28:02.172 [2024-10-17 19:35:25.711158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.711179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.712160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.712198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.712374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.712400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.712510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.712534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.712693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.712718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.712870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.712893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.713066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.713090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.713192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.713216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.713473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.713546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.713712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.713753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 
00:28:02.172 [2024-10-17 19:35:25.713904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.713940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.714127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.714162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.714287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.714320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.714509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.714542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.714680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.714716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.714845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.714880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.715047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.715073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.715190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.715213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.715310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.715334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.715485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.715508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 
00:28:02.172 [2024-10-17 19:35:25.715627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.715652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.715752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.715780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.719621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.719664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.719936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.719962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.720134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.720159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.720283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.720306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.720415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.720440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.720616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.720641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.720807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.172 [2024-10-17 19:35:25.720830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.172 qpair failed and we were unable to recover it. 00:28:02.172 [2024-10-17 19:35:25.720999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 
00:28:02.173 [2024-10-17 19:35:25.721113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.721315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.721434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.721549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.721749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.721962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.721986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.722107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.722130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.722295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.722319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.722484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.722507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.722729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.722753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 
00:28:02.173 [2024-10-17 19:35:25.722866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.722888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.722988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.723011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.723118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.723142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.723367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.723391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.723491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.723514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.723766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.723791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.723902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.723927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 
00:28:02.173 [2024-10-17 19:35:25.724404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.724969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.724992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.725172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.725295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.725420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.725531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.725664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 
00:28:02.173 [2024-10-17 19:35:25.725781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.725975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.725999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.726245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.726361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.726493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.726619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.726799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.726903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.726986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.727009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 00:28:02.173 [2024-10-17 19:35:25.727188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.173 [2024-10-17 19:35:25.727213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.173 qpair failed and we were unable to recover it. 
00:28:02.173 [2024-10-17 19:35:25.727305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.727326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.727481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.727506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.729654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.729685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.729778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.729801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.729986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 
00:28:02.174 [2024-10-17 19:35:25.730717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.730926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.730940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.731869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 
00:28:02.174 [2024-10-17 19:35:25.731960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.731975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.732912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.732928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 
00:28:02.174 [2024-10-17 19:35:25.733176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.733943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.733957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.734096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.734113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.174 [2024-10-17 19:35:25.734185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.734200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 
00:28:02.174 [2024-10-17 19:35:25.734330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.174 [2024-10-17 19:35:25.734347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.174 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.735609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.735627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.735778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.735794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.735933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.735949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 
00:28:02.175 [2024-10-17 19:35:25.736838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.736854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.736994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.737868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 
00:28:02.175 [2024-10-17 19:35:25.737952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.737970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.738109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.738126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.738208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.738223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.738314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.738329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.740609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.740629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.740769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.740782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.740843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.740856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 
00:28:02.175 [2024-10-17 19:35:25.741356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.741956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.175 [2024-10-17 19:35:25.741968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.175 qpair failed and we were unable to recover it. 00:28:02.175 [2024-10-17 19:35:25.742091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 
00:28:02.176 [2024-10-17 19:35:25.742425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.742932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.742944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 
00:28:02.176 [2024-10-17 19:35:25.743569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.743963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.743973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 
00:28:02.176 [2024-10-17 19:35:25.744353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.744955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.744966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 
00:28:02.176 [2024-10-17 19:35:25.745184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.176 [2024-10-17 19:35:25.745744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.176 qpair failed and we were unable to recover it. 00:28:02.176 [2024-10-17 19:35:25.745805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.745815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.745947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.745959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 
00:28:02.177 [2024-10-17 19:35:25.746043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.746054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.746134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.746144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.746268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.746279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.746401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.746413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.746611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.746623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.749622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.749643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.749719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.749731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.749925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.749937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.750062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.750147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 
00:28:02.177 [2024-10-17 19:35:25.750322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.750463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.750607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.750710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.750928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.750940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 
00:28:02.177 [2024-10-17 19:35:25.751448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.751942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.751952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 
00:28:02.177 [2024-10-17 19:35:25.752589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.752927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.752937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.753019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.753031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.753110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.753121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.753205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.753216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.753289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.753300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.753384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.177 [2024-10-17 19:35:25.753394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.177 qpair failed and we were unable to recover it. 00:28:02.177 [2024-10-17 19:35:25.753450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.753461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 
00:28:02.178 [2024-10-17 19:35:25.753518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.753528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.753666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.753678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.753807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.753820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.753951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.753964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 
00:28:02.178 [2024-10-17 19:35:25.754622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.754963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.754974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 
00:28:02.178 [2024-10-17 19:35:25.755440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.755965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.755978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 
00:28:02.178 [2024-10-17 19:35:25.756509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.756916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.756926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.757058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.757070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.757154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.757164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.757223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.757254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.757310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.757322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.757448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.757459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.178 qpair failed and we were unable to recover it. 00:28:02.178 [2024-10-17 19:35:25.757621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.178 [2024-10-17 19:35:25.757634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.179 qpair failed and we were unable to recover it. 
00:28:02.179 [2024-10-17 19:35:25.757766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.757778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.757998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.758010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.758158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.758171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.758245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.758256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.758314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.758326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.759613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.759636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.759722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.759734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.759965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.759985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.760913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.760930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.761909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.761921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.762071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.762104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.763984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.763997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.764073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.764085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.764230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.764243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.764321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.764333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.764413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.764424] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.179 qpair failed and we were unable to recover it.
00:28:02.179 [2024-10-17 19:35:25.764479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.179 [2024-10-17 19:35:25.764490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.764630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.764642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.764713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.764727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.764815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.764828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.764893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.764905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.764966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.764978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.765969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.765987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.766975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.766986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.767858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.767992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.768003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.768067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.768078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.768139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.768151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.768276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.768288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.180 qpair failed and we were unable to recover it.
00:28:02.180 [2024-10-17 19:35:25.768357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.180 [2024-10-17 19:35:25.768369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.768437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.768449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.768512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.768524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.768597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.768614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.768745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.768757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.768885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.768896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.768984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.768996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.769936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.769947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.770004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.770015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.770076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.770113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.770268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.770335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.770549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb56be0 is same with the state(6) to be set
00:28:02.181 [2024-10-17 19:35:25.770758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.770821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.771053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.771122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.771262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.771297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.771431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.771467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.771642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.771679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.771813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.771846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.771959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.771994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.772131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.772165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.772420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.772455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.772557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.772591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.772732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.772767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.772878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.772911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.773027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.773061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.773155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.773170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.773302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.773315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.773384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.773394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.773471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.773482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.181 qpair failed and we were unable to recover it.
00:28:02.181 [2024-10-17 19:35:25.773549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.181 [2024-10-17 19:35:25.773560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.773625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.773636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.773771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.773782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.773833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.773845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.773926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.773937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.773994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.774860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.774999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.775031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.775208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.775242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.775418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.775454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.775560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.775593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.775789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.775823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.775954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.775988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.776115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.776149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.776317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.776351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.776472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.776504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.776618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.776652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.776758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.776792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.776966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.776999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.777916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.777991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.778002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.778207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.778240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.778441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.778474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.778594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.778641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.778815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.778848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.778955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.182 [2024-10-17 19:35:25.778988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.182 qpair failed and we were unable to recover it.
00:28:02.182 [2024-10-17 19:35:25.779104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.779137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.779307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.779341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.779520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.779532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.779727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.779740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.779869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.779881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.779949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.779964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.780039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.780054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.780130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.780144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.780290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.780305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.780386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.780400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.780518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.780551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.780736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.780770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.781017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.781050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.781166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.781199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.781324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.781357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.781533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.781565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.781761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.183 [2024-10-17 19:35:25.781795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.183 qpair failed and we were unable to recover it.
00:28:02.183 [2024-10-17 19:35:25.781987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.782020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.782197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.782230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.782343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.782376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.782510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.782545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.782838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.782873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.783062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.783096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.783201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.783234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.783410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.783444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.783555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.783588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.783794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.783828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 
00:28:02.183 [2024-10-17 19:35:25.783932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.783970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.784051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.784217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.784365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.784533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.784650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.183 [2024-10-17 19:35:25.784819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.183 qpair failed and we were unable to recover it. 00:28:02.183 [2024-10-17 19:35:25.784974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.785031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.785258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.785311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.785494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.785532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 
00:28:02.184 [2024-10-17 19:35:25.785703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.785720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.785824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.785841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.785987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.786086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.786268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.786440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.786590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.786747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.786960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.786991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.787102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.787118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 
00:28:02.184 [2024-10-17 19:35:25.787316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.787349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.787491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.787530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.787717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.787754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.787869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.787902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.788087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.788121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.788305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.788337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.788569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.788613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.788835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.788852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.788927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.788968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.789092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.789127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 
00:28:02.184 [2024-10-17 19:35:25.789241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.789274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.789382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.789416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.789534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.789550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.789683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.789700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.184 qpair failed and we were unable to recover it. 00:28:02.184 [2024-10-17 19:35:25.789788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.184 [2024-10-17 19:35:25.789805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.789873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.789887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.789986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.790019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.790260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.790294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.790417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.790450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.790639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.790674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 
00:28:02.185 [2024-10-17 19:35:25.790786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.790819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.791011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.791044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.791167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.791199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.791445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.791479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.791582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.791622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.791736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.791769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.792058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.792092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.792217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.792250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.792424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.792444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.792529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.792548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 
00:28:02.185 [2024-10-17 19:35:25.792647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.792668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.792758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.792778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.792988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.793097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.793213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.793452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.793559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.793679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.793870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.793889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.794004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 
00:28:02.185 [2024-10-17 19:35:25.794126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.794379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.794500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.794648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.794808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.794959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.794993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.795195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.795216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.185 [2024-10-17 19:35:25.795383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.185 [2024-10-17 19:35:25.795402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.185 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.795567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.795587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.795754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.795774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 
00:28:02.186 [2024-10-17 19:35:25.795868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.795887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.795994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.796114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.796250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.796430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.796594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.796775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.796968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.796988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.797144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.797257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 
00:28:02.186 [2024-10-17 19:35:25.797450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.797567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.797681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.797847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.797943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.797962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.798114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.798135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.798296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.798316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.798473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.798493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.798575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.798593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.798696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.798718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 
00:28:02.186 [2024-10-17 19:35:25.798815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.798834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.799019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.799040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.799132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.799153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.799297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.799340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.799514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.799547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.799757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.799790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.799943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.799976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.800223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.800257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.800525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.800557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.800720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.800755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 
00:28:02.186 [2024-10-17 19:35:25.800946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.800979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.801152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.801186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.801426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.801454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.801559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.801582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.801765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.801797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.801922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.801954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.186 [2024-10-17 19:35:25.802152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.186 [2024-10-17 19:35:25.802185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.186 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.802312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.802355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.802519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.802542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.802725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.802749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 
00:28:02.187 [2024-10-17 19:35:25.802858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.802902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.803088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.803121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.803240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.803262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.803420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.803463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.803587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.803631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.803753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.803784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.803994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.804017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.804169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.804195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.804364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.804396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.804663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.804698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 
00:28:02.187 [2024-10-17 19:35:25.804815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.804848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.804991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.805023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.805211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.805244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.805450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.805483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.805668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.805703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.805903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.805936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.806056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.806087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.806256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.806290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.806551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.806575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.806739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.806767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 
00:28:02.187 [2024-10-17 19:35:25.806921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.806955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.807128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.807161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.807295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.807327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.807500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.807534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.807768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.807804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.807998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.808031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.808268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.808301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.808543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.808576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.808770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.808804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 00:28:02.187 [2024-10-17 19:35:25.808980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.187 [2024-10-17 19:35:25.809013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.187 qpair failed and we were unable to recover it. 
00:28:02.187 [2024-10-17 19:35:25.809211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.187 [2024-10-17 19:35:25.809245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.187 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for roughly 200 further reconnect attempts against tqpair=0xb48ca0: connect() errno = 111 from posix_sock_create, the sock connection error from nvme_tcp_qpair_connect_sock for addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", with timestamps advancing from 19:35:25.809 to 19:35:25.849 ...]
00:28:02.193 [2024-10-17 19:35:25.849262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.193 [2024-10-17 19:35:25.849295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.193 qpair failed and we were unable to recover it.
00:28:02.193 [2024-10-17 19:35:25.849495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.193 [2024-10-17 19:35:25.849528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.193 qpair failed and we were unable to recover it. 00:28:02.193 [2024-10-17 19:35:25.849673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.193 [2024-10-17 19:35:25.849696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.193 qpair failed and we were unable to recover it. 00:28:02.193 [2024-10-17 19:35:25.849865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.849887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.850113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.850136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.850225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.850248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.850472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.850505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.850631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.850665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.850780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.850813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.850937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.850970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.851149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.851182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 
00:28:02.194 [2024-10-17 19:35:25.851307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.851340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.851506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.851551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.851838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.851862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.851977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.852000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.852156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.852201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.852330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.852362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.852492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.852524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.852705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.852740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.852864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.852898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.853020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.853053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 
00:28:02.194 [2024-10-17 19:35:25.853180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.853220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.853378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.853401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.853530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.853563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.853765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.853799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.854068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.854105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.854297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.854321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.854415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.854438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.854560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.854584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.854754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.854777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.854868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.854891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 
00:28:02.194 [2024-10-17 19:35:25.855037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.855061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.855157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.855199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.855409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.855443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.855558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.855592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.855725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.855758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.194 [2024-10-17 19:35:25.855929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.194 [2024-10-17 19:35:25.855963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.194 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.856142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.856176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.856362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.856394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.856520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.856553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.856695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.856729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 
00:28:02.195 [2024-10-17 19:35:25.856852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.856885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.857071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.857105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.857231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.857263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.857385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.857407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.857639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.857664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.857820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.857843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.857938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.857963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.858073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.858096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.858272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.858304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.858476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.858509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 
00:28:02.195 [2024-10-17 19:35:25.858627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.858661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.858775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.858807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.858998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.859031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.859223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.859255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.859431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.859464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.859663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.859699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.859894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.859927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.860103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.860136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.860255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.860278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.860385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.860409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 
00:28:02.195 [2024-10-17 19:35:25.860505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.860529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.860684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.860709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.860887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.860909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.861068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.861091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.861242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.861266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.861355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.861381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.861558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.861582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.861742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.861766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.861882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.861913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 00:28:02.195 [2024-10-17 19:35:25.862158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.195 [2024-10-17 19:35:25.862191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.195 qpair failed and we were unable to recover it. 
00:28:02.195 [2024-10-17 19:35:25.862381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.195 [2024-10-17 19:35:25.862414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.195 qpair failed and we were unable to recover it.
00:28:02.195 [2024-10-17 19:35:25.862572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.195 [2024-10-17 19:35:25.862661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.195 qpair failed and we were unable to recover it.
[... the same triplet repeats eight more times against the new tqpair=0x7f8500000b90 (19:35:25.862895 through 19:35:25.864269, wall clock 00:28:02.195-00:28:02.196), same addr=10.0.0.2, port=4420 ...]
00:28:02.196 [2024-10-17 19:35:25.864473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.196 [2024-10-17 19:35:25.864509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.196 qpair failed and we were unable to recover it.
[... connection attempts fall back to tqpair=0xb48ca0 and the identical triplet repeats for nearly ninety further attempts (19:35:25.864700 through 19:35:25.879788, wall clock 00:28:02.196-00:28:02.198), addr=10.0.0.2, port=4420 ...]
00:28:02.198 [2024-10-17 19:35:25.879874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.879895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.880089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.880122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.880231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.880263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.880436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.880469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.880622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.880645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.880808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.880830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.880938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.880961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.881212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.881246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.881361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.881394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.881559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.881591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 
00:28:02.198 [2024-10-17 19:35:25.881779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.881802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.882031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.882064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.882295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.882327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.882449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.882471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.882636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.882661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.882815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.882847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.198 [2024-10-17 19:35:25.883053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.198 [2024-10-17 19:35:25.883086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.198 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.883346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.883378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.883635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.883660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.883742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.883763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 
00:28:02.199 [2024-10-17 19:35:25.883929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.883953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.884171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.884195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.884303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.884325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.884426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.884448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.884547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.884571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.884743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.884766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.884867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.884889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.885041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.885064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.885152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.885174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.885400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.885432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 
00:28:02.199 [2024-10-17 19:35:25.885675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.885710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.885919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.885954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.886144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.886178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.886281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.886305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.886475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.886498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.886679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.886713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.886907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.886939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.887057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.887089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.887353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.887376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.887488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.887521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 
00:28:02.199 [2024-10-17 19:35:25.887716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.887751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.887936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.887970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.888102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.888135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.888257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.888289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.199 [2024-10-17 19:35:25.888403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.199 [2024-10-17 19:35:25.888435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.199 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.888629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.888663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.888929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.888962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.889075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.889108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.889237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.889260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.889431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.889463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 
00:28:02.200 [2024-10-17 19:35:25.889670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.889704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.889898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.889931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.890117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.890149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.890273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.890306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.890418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.890451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.890567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.890613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.890766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.890789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.890949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.890972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.891131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.891159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.891245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.891267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 
00:28:02.200 [2024-10-17 19:35:25.891364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.891387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.891545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.891568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.891688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.891718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.891870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.891893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.892057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.892080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.892254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.892276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.892521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.892544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.892626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.892648] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.892765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.892787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.892955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.892987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 
00:28:02.200 [2024-10-17 19:35:25.893198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.893231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.893356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.893388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.893523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.893556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.893717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.893740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.893895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.893918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.894078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.894101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.894283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.894307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.894395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.894416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.894627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.894652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.894803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.894827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 
00:28:02.200 [2024-10-17 19:35:25.894931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.894953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.895076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.895099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.895324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.895358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.895477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.895509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.200 qpair failed and we were unable to recover it. 00:28:02.200 [2024-10-17 19:35:25.895708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.200 [2024-10-17 19:35:25.895742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.895927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.895959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.896185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.896217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.896422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.896461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.896573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.896596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.896713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.896737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 
00:28:02.201 [2024-10-17 19:35:25.896832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.896854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.897027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.897061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.897259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.897291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.897475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.897508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.897688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.897712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.897923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.897947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.898121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.898145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.898240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.898263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.898376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.898399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.898579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.898608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 
00:28:02.201 [2024-10-17 19:35:25.898730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.898753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.898945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.898978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.899176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.899211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.899415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.899447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.899562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.899585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.899722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.899747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.899927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.899960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.900076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.900108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.900343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.900375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.900494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.900527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 
00:28:02.201 [2024-10-17 19:35:25.900698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.900723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.201 [2024-10-17 19:35:25.900931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.201 [2024-10-17 19:35:25.900965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.201 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.901097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.901131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.901424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.901459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.901624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.901661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.901829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.901852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.902040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.902062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.902308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.902341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.492 [2024-10-17 19:35:25.902524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.492 [2024-10-17 19:35:25.902558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.492 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.902690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.902723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 
00:28:02.493 [2024-10-17 19:35:25.902904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.902927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.903033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.903057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.903213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.903253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.903396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.903429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.903607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.903641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.903893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.903926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.904056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.904094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.904279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.904312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.904553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.904586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 00:28:02.493 [2024-10-17 19:35:25.904775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.904798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 
00:28:02.493 [2024-10-17 19:35:25.904905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.493 [2024-10-17 19:35:25.904938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.493 qpair failed and we were unable to recover it. 
00:28:02.493 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 19:35:25.905 through 19:35:25.944 ...]
00:28:02.498 [2024-10-17 19:35:25.944385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.944408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 
00:28:02.498 [2024-10-17 19:35:25.944508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.944531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.944690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.944713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.944824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.944846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.944948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.944971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.945153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.945186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.945395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.945428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.945617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.945651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.945850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.945883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.946062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.946095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.946275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.946309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 
00:28:02.498 [2024-10-17 19:35:25.946558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.946593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.946803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.946843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.946949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.946973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.947138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.947161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.947246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.947270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.498 qpair failed and we were unable to recover it. 00:28:02.498 [2024-10-17 19:35:25.947366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.498 [2024-10-17 19:35:25.947409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.947519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.947552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.947829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.947862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.947991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.948026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.948225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.948258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 
00:28:02.499 [2024-10-17 19:35:25.948436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.948468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.948714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.948749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.948924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.948958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.949225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.949258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.949395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.949429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.949608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.949641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.949755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.949789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.949964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.949986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.950155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.950178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.950399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.950432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 
00:28:02.499 [2024-10-17 19:35:25.950545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.950579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.950845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.950890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.950996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.951115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.951378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.951483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.951599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.951753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.951871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.951892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.952058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.952082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 
00:28:02.499 [2024-10-17 19:35:25.952298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.952321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.952417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.952440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.952526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.952549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.952738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.952763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.953009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.953031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.953203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.953227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.953326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.953349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.953444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.953466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.953549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.953571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 00:28:02.499 [2024-10-17 19:35:25.953671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.499 [2024-10-17 19:35:25.953693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.499 qpair failed and we were unable to recover it. 
00:28:02.500 [2024-10-17 19:35:25.953799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.953822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.953973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.953996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.954102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.954123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.954371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.954395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.954516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.954540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.954706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.954730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.954899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.954923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.955021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.955156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.955343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 
00:28:02.500 [2024-10-17 19:35:25.955463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.955651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.955779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.955890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.955914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.956143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.956166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.956339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.956362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.956473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.956496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.956597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.956628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.956780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.956802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.957050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.957082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 
00:28:02.500 [2024-10-17 19:35:25.957220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.957252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.957438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.957471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.957594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.957639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.957744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.957777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.957886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.957920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 
00:28:02.500 [2024-10-17 19:35:25.958726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.958945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.958969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.959121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.959144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.959295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.959317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.959487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.959510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.959683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.500 [2024-10-17 19:35:25.959708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.500 qpair failed and we were unable to recover it. 00:28:02.500 [2024-10-17 19:35:25.959828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.959850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.959933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.959955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.960108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.960131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 
00:28:02.501 [2024-10-17 19:35:25.960234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.960256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.960363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.960386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.960488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.960511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.960628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.960651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.960867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.960889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.961023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.961174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.961388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.961612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.961734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 
00:28:02.501 [2024-10-17 19:35:25.961861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.961972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.961995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.962170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.962203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.962396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.962428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.962665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.962699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.962887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.962919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.963053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.963086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.963284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.963316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.963443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.963466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.963566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.963589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 
00:28:02.501 [2024-10-17 19:35:25.963721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.963745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.963920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.963943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.964113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.964136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.964266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.964299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.964415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.964448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.964639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.964674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.964867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.964901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.965071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.965103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.501 qpair failed and we were unable to recover it. 00:28:02.501 [2024-10-17 19:35:25.965279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.501 [2024-10-17 19:35:25.965311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.965487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.965509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 
00:28:02.502 [2024-10-17 19:35:25.965685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.965709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.965814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.965840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.966015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.966039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.966141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.966174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.966384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.966418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.966597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.966638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.966894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.966917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.967013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.967034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.967135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.967157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 00:28:02.502 [2024-10-17 19:35:25.967314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.502 [2024-10-17 19:35:25.967337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.502 qpair failed and we were unable to recover it. 
00:28:02.502 [2024-10-17 19:35:25.967423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.502 [2024-10-17 19:35:25.967463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.502 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 19:35:25.967 through 19:35:26.008: every connect() attempt fails with errno = 111, each followed by the same nvme_tcp_qpair_connect_sock error for tqpair=0xb48ca0 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:28:02.508 [2024-10-17 19:35:26.008195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.508 [2024-10-17 19:35:26.008228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.508 qpair failed and we were unable to recover it.
00:28:02.508 [2024-10-17 19:35:26.008333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.008364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.008482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.008516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.008695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.008736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.008915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.008937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.009038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.009084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.009302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.009336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.009463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.009495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.009712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.009736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.009831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.009852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.009952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.009974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 
00:28:02.508 [2024-10-17 19:35:26.010191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.010224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.010400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.010431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.010619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.010653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.010829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.010861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.011146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.011178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.011444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.011477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.011739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.011763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.011932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.011955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.012128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.012162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.012356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.012388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 
00:28:02.508 [2024-10-17 19:35:26.012521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.012552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.012807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.012841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.013017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.013050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.013164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.013197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.013330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.013364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.013549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.013581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.013855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.013888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.014060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.014083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.014249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.014272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 00:28:02.508 [2024-10-17 19:35:26.014470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.508 [2024-10-17 19:35:26.014503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.508 qpair failed and we were unable to recover it. 
00:28:02.509 [2024-10-17 19:35:26.014676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.014701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.014813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.014835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.014944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.014968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.015139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.015162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.015261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.015284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.015504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.015537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.015660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.015695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.015821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.015853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.016034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.016067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.016242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.016275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 
00:28:02.509 [2024-10-17 19:35:26.016451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.016483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.016663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.016698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.016872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.016904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.017091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.017125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.017308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.017341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.017531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.017566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.017780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.017804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.018029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.018061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.018302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.018334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.018458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.018496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 
00:28:02.509 [2024-10-17 19:35:26.018615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.018638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.018731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.018753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.018907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.018930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.019037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.019059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.019223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.019246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.019352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.019384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.019564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.019598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.019724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.019757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.019999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.020021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.020189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.020224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 
00:28:02.509 [2024-10-17 19:35:26.020404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.020436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.020565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.020598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.020802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.020826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.020949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.020982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.021177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.021209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.021384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.021417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.021588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.021633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.021766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.021799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.021928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.021960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 00:28:02.509 [2024-10-17 19:35:26.022172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.509 [2024-10-17 19:35:26.022195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.509 qpair failed and we were unable to recover it. 
00:28:02.510 [2024-10-17 19:35:26.022285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.022306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.022532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.022567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.022711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.022750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.022870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.022903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.023084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.023118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.023299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.023331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.023572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.023616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.023804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.023827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.023999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.024022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.024183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.024216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 
00:28:02.510 [2024-10-17 19:35:26.024334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.024366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.024565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.024597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.024875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.024909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.025034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.025066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.025240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.025274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.025394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.025425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.025615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.025650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.025781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.025814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.025938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.025970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.026157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.026191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 
00:28:02.510 [2024-10-17 19:35:26.026320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.026354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.026527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.026561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.026714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.026747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.026924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.026948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.027040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.027063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.027216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.027240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.027410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.027432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.027527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.027550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.027782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.027806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.027913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.027936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 
00:28:02.510 [2024-10-17 19:35:26.028129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.028162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.028358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.028390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.028505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.028538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.028787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.028822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.028933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.028956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.029122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.029144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.029405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.029438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.029682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.029716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.029948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.029970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 00:28:02.510 [2024-10-17 19:35:26.030131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.030153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.510 qpair failed and we were unable to recover it. 
00:28:02.510 [2024-10-17 19:35:26.030331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.510 [2024-10-17 19:35:26.030365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.030551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.030584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.030785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.030818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.031057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.031130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.031265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.031303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.031548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.031581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.031786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.031812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.032033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.032057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.032143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.032164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.032279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.032302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 
00:28:02.511 [2024-10-17 19:35:26.032470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.032493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.032768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.032804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.032923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.032956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.033127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.033162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.033288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.033322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.033447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.033479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.033654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.033688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.033817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.033849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.034023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.034057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 00:28:02.511 [2024-10-17 19:35:26.034233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.511 [2024-10-17 19:35:26.034266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.511 qpair failed and we were unable to recover it. 
00:28:02.511 [2024-10-17 19:35:26.034383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.511 [2024-10-17 19:35:26.034416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.511 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error; qpair failed and we were unable to recover it.) repeats back-to-back from 19:35:26.034540 through 19:35:26.043795, every time with tqpair=0xb48ca0, addr=10.0.0.2, port=4420 ...]
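For context on the repeated failure above: errno = 111 on Linux is ECONNREFUSED, i.e. the TCP connection attempt to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) was actively refused, which usually means nothing was listening on that address at the time, for example because the target side of the test had died or had not finished starting. A minimal standalone sketch that reproduces the same errno, assuming a Linux host from which 10.0.0.2 is reachable but which has no listener on 4420:

    /* Reproduces the failure mode in the log: connect() to a reachable
     * host with no listener on the port fails with ECONNREFUSED (111).
     * The address and port are taken from the log above. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
            /* With no listener on 10.0.0.2:4420 this prints errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Built with any C compiler, this prints "connect() failed, errno = 111 (Connection refused)", matching the posix.c:1055 messages in this log.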
00:28:02.512 [2024-10-17 19:35:26.043969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.512 [2024-10-17 19:35:26.044002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.512 qpair failed and we were unable to recover it.
00:28:02.512 [2024-10-17 19:35:26.044256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.512 [2024-10-17 19:35:26.044328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.512 qpair failed and we were unable to recover it.
[... the sequence repeats with tqpair=0x7f8500000b90 through 19:35:26.045451, then returns to tqpair=0xb48ca0 from 19:35:26.045557 through 19:35:26.045822 ...]
[... the sequence repeats back-to-back with tqpair=0xb48ca0 from 19:35:26.045936 until 19:35:26.057532, when the failing qpair changes again ...]
00:28:02.514 [2024-10-17 19:35:26.057532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.514 [2024-10-17 19:35:26.057618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.514 qpair failed and we were unable to recover it.
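The tqpair= value in these messages is the address of the qpair object, and the burst alternates between 0xb48ca0 and two heap-range pointers (0x7f8500000b90 earlier, 0x7f84fc000b90 here). A plausible reading, though it is an inference from the addresses rather than something the log states, is that several distinct qpair objects (for example admin and I/O qpairs, or qpairs re-created across reset attempts) are all failing against the same refused target. The retries in this log land microseconds apart; for contrast, here is an illustrative bounded reconnect loop with exponential backoff. This is a sketch only, not SPDK's actual recovery logic, and try_connect() is a helper written for this example:

    /* Illustrative sketch only, not SPDK's reconnect logic: a bounded
     * retry loop with exponential backoff around a plain TCP connect(),
     * instead of the sub-millisecond retry burst visible above. */
    #include <errno.h>
    #include <stdbool.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    /* Dial addr:port once; return 0 on success, -errno on failure. */
    static int try_connect(const char *addr, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -errno;
        }
        struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, addr, &sa.sin_addr);
        int rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0 ? 0 : -errno;
        close(fd);
        return rc;
    }

    static bool connect_with_backoff(const char *addr, int port, int max_attempts)
    {
        useconds_t delay_us = 1000; /* first retry after 1 ms */

        for (int attempt = 0; attempt < max_attempts; attempt++) {
            int rc = try_connect(addr, port);
            if (rc == 0) {
                return true;      /* connected */
            }
            if (rc != -ECONNREFUSED && rc != -ETIMEDOUT) {
                return false;     /* unexpected error: stop retrying */
            }
            usleep(delay_us);
            if (delay_us < 1000000) {
                delay_us *= 2;    /* double the delay, capped at 1 s */
            }
        }
        return false;             /* target kept refusing */
    }

    int main(void)
    {
        return connect_with_backoff("10.0.0.2", 4420, 8) ? 0 : 1;
    }

With max_attempts = 8 and a 1 ms initial delay, the loop gives up after roughly a quarter second of backoff instead of logging hundreds of refusals per millisecond.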
00:28:02.514 [2024-10-17 19:35:26.057780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.514 [2024-10-17 19:35:26.057819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.514 qpair failed and we were unable to recover it.
[... the burst continues from 19:35:26.058010 through 19:35:26.075829, the same three-line sequence alternating between tqpair=0x7f84fc000b90 and tqpair=0xb48ca0, always with addr=10.0.0.2, port=4420, errno = 111 ...]
00:28:02.516 [2024-10-17 19:35:26.076046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.516 [2024-10-17 19:35:26.076080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.516 qpair failed and we were unable to recover it.
00:28:02.516 [2024-10-17 19:35:26.076318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.516 [2024-10-17 19:35:26.076351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.516 qpair failed and we were unable to recover it. 00:28:02.516 [2024-10-17 19:35:26.076487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.516 [2024-10-17 19:35:26.076522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.516 qpair failed and we were unable to recover it. 00:28:02.516 [2024-10-17 19:35:26.076675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.516 [2024-10-17 19:35:26.076710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.516 qpair failed and we were unable to recover it. 00:28:02.516 [2024-10-17 19:35:26.076920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.076953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.077132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.077166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.077352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.077385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.077500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.077534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.077667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.077701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.077910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.077944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.078084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.078117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 
00:28:02.517 [2024-10-17 19:35:26.078259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.078282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.078452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.078485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.078697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.078731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.078859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.078884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.079057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.079089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.079272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.079306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.079484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.079517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.079742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.079782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.079897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.079929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.080110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.080143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 
00:28:02.517 [2024-10-17 19:35:26.080328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.080361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.080495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.080529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.080665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.080700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.080881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.080913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.081029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.081061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.081245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.081268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.081438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.081470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.081648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.081682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.081904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.081939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.082168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.082190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 
00:28:02.517 [2024-10-17 19:35:26.082281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.082302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.082395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.082419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.082572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.082594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.082716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.082739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.082900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.082924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.083098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.083130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.083385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.083417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.083563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.083597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.083745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.083777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.083905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.083939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 
00:28:02.517 [2024-10-17 19:35:26.084066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.084098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.084272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.084295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.084463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.084495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.517 [2024-10-17 19:35:26.084681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.517 [2024-10-17 19:35:26.084716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.517 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.084837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.084869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.085005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.085034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.085213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.085246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.085430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.085462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.085594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.085638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.085775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.085799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 
00:28:02.518 [2024-10-17 19:35:26.085959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.085991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.086108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.086142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.086349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.086380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.086562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.086596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.086736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.086778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.086881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.086905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.087065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.087097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.087218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.087251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.087371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.087408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.087532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.087565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 
00:28:02.518 [2024-10-17 19:35:26.087788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.087822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.088007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.088041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.088145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.088168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.088247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.088270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.088359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.088382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.088542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.088586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.088794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.088827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.089094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.089137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.089289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.089312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.089489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.089511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 
00:28:02.518 [2024-10-17 19:35:26.089667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.089698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.089889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.089923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.090052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.090086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.090275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.090307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.090442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.090477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.090720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.090755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.090938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.090970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.091169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.091201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.091311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.091350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.091444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.091467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 
00:28:02.518 [2024-10-17 19:35:26.091668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.091694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.091807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.091829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.091933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.518 [2024-10-17 19:35:26.091957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.518 qpair failed and we were unable to recover it. 00:28:02.518 [2024-10-17 19:35:26.092062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.092086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.092192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.092216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.092383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.092409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.092562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.092587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.092710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.092733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.092901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.092925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.093022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.093047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 
00:28:02.519 [2024-10-17 19:35:26.093167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.093200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.093375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.093409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.093583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.093627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.093834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.093857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.094014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.094037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.094194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.094219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.094322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.094345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.094538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.094570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.094830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.094905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.095135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.095174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 
00:28:02.519 [2024-10-17 19:35:26.095301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.095334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.095506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.095533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.095766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.095790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.095947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.095970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.096069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.096112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.096364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.096397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.096572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.096643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.096835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.096867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.096975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.097008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.097252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.097283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 
00:28:02.519 [2024-10-17 19:35:26.097477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.097506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.097642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.097675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.097858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.097888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.098081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.098112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.098303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.098323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.098407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.098427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.098583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.098608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.098847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.098878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.099065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.099096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 00:28:02.519 [2024-10-17 19:35:26.099224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.519 [2024-10-17 19:35:26.099256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.519 qpair failed and we were unable to recover it. 
00:28:02.519 [2024-10-17 19:35:26.099373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.099404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.099572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.099613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.099741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.099771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.099898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.099919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.100021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.100041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.100199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.100219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.100313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.100339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.100432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.100455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.100678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.100700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.100892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.100923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 
00:28:02.520 [2024-10-17 19:35:26.101114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.101146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.101257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.101289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.101427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.101459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.101645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.101679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.101852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.101883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.102119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.102151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.102334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.102364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.102552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.102584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.102784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.102815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.103005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.103037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 
00:28:02.520 [2024-10-17 19:35:26.103155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.103189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.103309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.103334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.103557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.103641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.103827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.103900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.104017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.104042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.104236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.104269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.104478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.104510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.104687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.104722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.105014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.105047] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 00:28:02.520 [2024-10-17 19:35:26.105229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.520 [2024-10-17 19:35:26.105252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.520 qpair failed and we were unable to recover it. 
00:28:02.520 [... the identical connect() failed (errno = 111) / sock connection error / qpair failed sequence repeats from 2024-10-17 19:35:26.105379 through 19:35:26.142987, alternating between tqpair=0xb48ca0 and tqpair=0x7f84fc000b90, always against addr=10.0.0.2, port=4420 ...]
00:28:02.525 [2024-10-17 19:35:26.143091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.143115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.143273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.143296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.143377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.143419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.143612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.143646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.143849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.143881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.144074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.144097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.144266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.144288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.144443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.144477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.144744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.144778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.144918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.144951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 
00:28:02.525 [2024-10-17 19:35:26.145078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.145110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.145346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.145370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.145528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.145551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.145655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.145678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.145847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.145870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.146108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.146131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.146235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.146257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.146417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.525 [2024-10-17 19:35:26.146440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.525 qpair failed and we were unable to recover it. 00:28:02.525 [2024-10-17 19:35:26.146606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.146630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.146808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.146841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 
00:28:02.526 [2024-10-17 19:35:26.146948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.146982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.147159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.147184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.147355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.147381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.147546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.147568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.147668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.147693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.147810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.147842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.147951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.147983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.148176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.148209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.148494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.148527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.148701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.148741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 
00:28:02.526 [2024-10-17 19:35:26.148945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.148979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.149166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.149199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.149332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.149364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.149498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.149532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.149646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.149680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.149788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.149821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.150060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.150093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.150223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.150255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.150435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.150468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.150596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.150637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 
00:28:02.526 [2024-10-17 19:35:26.150825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.150859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.151036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.151058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.151161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.151204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.151435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.151507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.151750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.151823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.152020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.152063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.152179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.152203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.152365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.152387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.152484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.152508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.152677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.152701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 
00:28:02.526 [2024-10-17 19:35:26.152871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.152893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.153004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.153026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.153248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.153281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.153470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.153503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.153629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.153665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.153857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.153890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.154155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.154187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.526 [2024-10-17 19:35:26.154334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.526 [2024-10-17 19:35:26.154367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.526 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.154498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.154531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.154643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.154676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 
00:28:02.527 [2024-10-17 19:35:26.154787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.154819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.155948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.155981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.156105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.156138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.156251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.156284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 
00:28:02.527 [2024-10-17 19:35:26.156434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.156472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.156583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.156652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.156786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.156819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.157964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.157989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 
00:28:02.527 [2024-10-17 19:35:26.158080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.158102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.158210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.158235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.158351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.158374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.158615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.158640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.158806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.158829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.158985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.159008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.159110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.159133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.159311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.159344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.159517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.159550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.159755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.159790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 
00:28:02.527 [2024-10-17 19:35:26.160070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.160104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.160344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.160366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.160564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.160587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.160781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.160804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.160901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.160925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.161078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.161101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.161302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.161333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.161464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.527 [2024-10-17 19:35:26.161503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.527 qpair failed and we were unable to recover it. 00:28:02.527 [2024-10-17 19:35:26.161630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.161663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.161788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.161820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 
00:28:02.528 [2024-10-17 19:35:26.161938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.161971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.162162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.162194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.162303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.162326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.162485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.162509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.162641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.162676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.162882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.162916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.163033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.163065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.163277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.163299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.163406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.163439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.163619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.163653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 
00:28:02.528 [2024-10-17 19:35:26.163893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.163925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.164032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.164055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.164209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.164231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.164447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.164471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.164639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.164662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.164785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.164807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.164966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.164988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.165094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.165116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.165247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.165271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.165513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.165536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 
00:28:02.528 [2024-10-17 19:35:26.165712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.165736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.165911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.165936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.166049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.166073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.166166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.166187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.166437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.166469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.166684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.166718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.166914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.166957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.167118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.167142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.167385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.167409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 00:28:02.528 [2024-10-17 19:35:26.167574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.528 [2024-10-17 19:35:26.167599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.528 qpair failed and we were unable to recover it. 
00:28:02.528 [2024-10-17 19:35:26.167693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.528 [2024-10-17 19:35:26.167715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.528 qpair failed and we were unable to recover it.
00:28:02.528 [log condensed: the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 19:35:26.167693 through 19:35:26.207294]
00:28:02.534 [2024-10-17 19:35:26.207414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.207447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.207692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.207727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.207903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.207941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.208130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.208164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.208344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.208377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.208585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.208626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.208745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.208779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.208962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.208994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.209100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.209133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.209370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.209402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 
00:28:02.534 [2024-10-17 19:35:26.209530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.209563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.209751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.209785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.209958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.209991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.210106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.210139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.210323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.210357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.210479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.210511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.210654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.210690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.210822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.210856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.210965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.210997] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.211191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.211224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 
00:28:02.534 [2024-10-17 19:35:26.211487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.211511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.211678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.211702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.211801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.211823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.211909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.211932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.212081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.212103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.212191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.212212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.212370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.212393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.212593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.212636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.212768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.212801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.212919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.212952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 
00:28:02.534 [2024-10-17 19:35:26.213153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.213186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.213318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.213340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.213560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.213584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.213770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.213793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.213914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.534 [2024-10-17 19:35:26.213938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.534 qpair failed and we were unable to recover it. 00:28:02.534 [2024-10-17 19:35:26.214111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.214134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.214237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.214260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.214434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.214457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.214557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.214581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.214681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.214704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 
00:28:02.535 [2024-10-17 19:35:26.214953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.214976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.215969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.215991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.216178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.216201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.216373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.216395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 
00:28:02.535 [2024-10-17 19:35:26.216519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.216552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.216685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.216720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.216988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.217021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.217310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.217342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.217477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.217510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.217635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.217669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.217911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.217943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.218126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.218159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.218342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.218375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.218500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.218524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 
00:28:02.535 [2024-10-17 19:35:26.218628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.218652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.218748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.218771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.218931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.218955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.219070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.219093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.219253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.219276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.219500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.219532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.219740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.219776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.219966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.219999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.220170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.220204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.220386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.220420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 
00:28:02.535 [2024-10-17 19:35:26.220572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.220618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.220814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.220847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.221022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.221054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.221227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.221251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.221414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.221456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.535 qpair failed and we were unable to recover it. 00:28:02.535 [2024-10-17 19:35:26.221643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.535 [2024-10-17 19:35:26.221679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.221857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.221891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.222130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.222154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.222254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.222276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.222463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.222496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 
00:28:02.536 [2024-10-17 19:35:26.222786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.222820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.222977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.223010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.223192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.223224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.223401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.223434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.223581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.223625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.223806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.223839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.224086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.224119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.224234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.224258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.224433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.224466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.224735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.224770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 
00:28:02.536 [2024-10-17 19:35:26.225047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.225081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.225262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.225295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.225520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.225553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.225735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.225768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.225973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.226006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.226178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.226201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.226337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.226370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.226491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.226522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.226672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.226706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.226953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.226987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 
00:28:02.536 [2024-10-17 19:35:26.227100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.227132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.227308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.227340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.227457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.227498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.227665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.227690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.227794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.227818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.228041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.228074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.228180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.228214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.228339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.228373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.228559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.228592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.228717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.228752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 
00:28:02.536 [2024-10-17 19:35:26.228924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.228957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.229198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.229235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.229356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.229379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.229606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.229631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.229820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.229843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.230007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.536 [2024-10-17 19:35:26.230040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.536 qpair failed and we were unable to recover it. 00:28:02.536 [2024-10-17 19:35:26.230255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.230289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.230410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.230443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.230637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.230661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.230747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.230769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 
00:28:02.537 [2024-10-17 19:35:26.230938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.230981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.231103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.231136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.231311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.231343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.231455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.231489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.231659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.231683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.231951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.231974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.232203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.232236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.232343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.232365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.232552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.232575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 00:28:02.537 [2024-10-17 19:35:26.232750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.232773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it. 
00:28:02.537 [2024-10-17 19:35:26.232896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.537 [2024-10-17 19:35:26.232929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.537 qpair failed and we were unable to recover it.
[... the same three-line connect() failure (errno = 111, addr=10.0.0.2, port=4420) repeats continuously for tqpair=0xb48ca0 from 19:35:26.232896 through 19:35:26.263169, roughly 200 occurrences in this window ...]
00:28:02.876 [2024-10-17 19:35:26.263330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.876 [2024-10-17 19:35:26.263402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.876 qpair failed and we were unable to recover it.
[... the same failure repeats twice more for tqpair=0x7f84fc000b90 (19:35:26.263720 and 19:35:26.263876), then resumes for tqpair=0xb48ca0 from 19:35:26.264032 through 19:35:26.273665 ...]
00:28:02.878 [2024-10-17 19:35:26.273835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.273867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.273981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.274014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.274206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.274244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.274439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.274472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.274651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.274675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.274822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.274845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.275094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.275126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.275348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.275381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.275580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.275626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.275811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.275843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 
00:28:02.878 [2024-10-17 19:35:26.276085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.276120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.276360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.276392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.276514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.276547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.276786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.276821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.276938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.276972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.277163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.277195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.277405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.277438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.277712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.277748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.277927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.277958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.278081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.278115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 
00:28:02.878 [2024-10-17 19:35:26.278358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.278390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.278564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.278598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.278777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.878 [2024-10-17 19:35:26.278810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.878 qpair failed and we were unable to recover it. 00:28:02.878 [2024-10-17 19:35:26.278985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.279018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.279209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.279241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.279438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.279471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.279646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.279680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.279807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.279839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.280109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.280143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.280257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.280291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 
00:28:02.879 [2024-10-17 19:35:26.280491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.280525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.280767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.280802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.280971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.281004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.281132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.281167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.281340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.281364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.281472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.281496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.281585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.281614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.281837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.281870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.282071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.282102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.282277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.282316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 
00:28:02.879 [2024-10-17 19:35:26.282485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.282508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.282587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.282653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.282795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.282828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.283025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.283063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.283310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.283344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.283462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.283505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.283620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.283643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.283807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.283831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.284013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.284035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.284267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.284291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 
00:28:02.879 [2024-10-17 19:35:26.284458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.284490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.284733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.284769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.284900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.284934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.285205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.285238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.879 [2024-10-17 19:35:26.285373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.879 [2024-10-17 19:35:26.285397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.879 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.285560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.285583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.285754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.285787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.285967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.286001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.286180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.286212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.286384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.286409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 
00:28:02.880 [2024-10-17 19:35:26.286515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.286539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.286627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.286649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.286796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.286819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.286983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.287006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.287208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.287232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.287336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.287359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.287472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.287495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.287651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.287676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.287842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.287877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.287986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.288022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 
00:28:02.880 [2024-10-17 19:35:26.288143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.288176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.288428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.288461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.288698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.288732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.288911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.288943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.289060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.289093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.289371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.289403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.289538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.289561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.289786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.289809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.289990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.290022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.290199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.290231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 
00:28:02.880 [2024-10-17 19:35:26.290359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.290392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.290529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.290570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.290672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.290697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.290850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.290872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.291065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.291089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.291189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.291211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.291321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.291344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.291452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.291475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.291636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.291670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.291887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.291920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 
00:28:02.880 [2024-10-17 19:35:26.292104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.292136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.292326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.292360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.880 [2024-10-17 19:35:26.292598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.880 [2024-10-17 19:35:26.292640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.880 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.292924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.292948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.293124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.293146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.293369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.293394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.293623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.293647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.293764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.293788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.293969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.294001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.294130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.294163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 
00:28:02.881 [2024-10-17 19:35:26.294348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.294381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.294645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.294680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.294808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.294842] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.295016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.295050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.295243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.295276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.295463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.295486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.295731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.295755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.295856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.295878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.296070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.296094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.296250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.296273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 
00:28:02.881 [2024-10-17 19:35:26.296389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.296431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.296551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.296595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.296793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.296827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.297064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.297096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.297273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.297306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.297423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.297456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.297646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.297669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.297829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.297861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.298002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.298035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.298245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.298278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 
00:28:02.881 [2024-10-17 19:35:26.298407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.298440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.298641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.298666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.298825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.298848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.298977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.299009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.299200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.299232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.299379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.299413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.299586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.299616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.299812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.299835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.300106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.300138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 00:28:02.881 [2024-10-17 19:35:26.300377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.881 [2024-10-17 19:35:26.300401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.881 qpair failed and we were unable to recover it. 
00:28:02.881 [2024-10-17 19:35:26.300630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.881 [2024-10-17 19:35:26.300665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.881 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triplet repeats for tqpair=0xb48ca0 through 2024-10-17 19:35:26.324098 ...]
[... the tqpair=0xb48ca0 triplet continues through 2024-10-17 19:35:26.325299 ...]
00:28:02.885 [2024-10-17 19:35:26.325458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.885 [2024-10-17 19:35:26.325529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.885 qpair failed and we were unable to recover it.
00:28:02.885 [2024-10-17 19:35:26.325842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.885 [2024-10-17 19:35:26.325881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.885 qpair failed and we were unable to recover it.
[... the tqpair=0xb48ca0 triplet resumes at 2024-10-17 19:35:26.326091 and repeats through 19:35:26.339399 ...]
00:28:02.887 [2024-10-17 19:35:26.339716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.887 [2024-10-17 19:35:26.339790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.887 qpair failed and we were unable to recover it.
00:28:02.887 [2024-10-17 19:35:26.340049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.887 [2024-10-17 19:35:26.340121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.887 qpair failed and we were unable to recover it.
00:28:02.887 [2024-10-17 19:35:26.340269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.887 [2024-10-17 19:35:26.340307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:02.887 qpair failed and we were unable to recover it.
[... the tqpair=0xb48ca0 triplet resumes at 2024-10-17 19:35:26.340487 and repeats through 19:35:26.341974 ...]
00:28:02.887 [2024-10-17 19:35:26.342088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.887 [2024-10-17 19:35:26.342121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.887 qpair failed and we were unable to recover it. 00:28:02.887 [2024-10-17 19:35:26.342248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.887 [2024-10-17 19:35:26.342280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.887 qpair failed and we were unable to recover it. 00:28:02.887 [2024-10-17 19:35:26.342403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.887 [2024-10-17 19:35:26.342436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.887 qpair failed and we were unable to recover it. 00:28:02.887 [2024-10-17 19:35:26.342645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.887 [2024-10-17 19:35:26.342679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.887 qpair failed and we were unable to recover it. 00:28:02.887 [2024-10-17 19:35:26.342801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.887 [2024-10-17 19:35:26.342834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.887 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.342960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.342993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.343171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.343204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.343326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.343358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.343539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.343572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.343758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.343793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 
00:28:02.888 [2024-10-17 19:35:26.343910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.343942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.344138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.344171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.344296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.344330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.344457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.344500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.344596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.344624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.344865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.344897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.345118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.345157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.345344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.345376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.345628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.345651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.345844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.345868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 
00:28:02.888 [2024-10-17 19:35:26.345982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.346016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.346144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.346177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.346367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.346399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.346639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.346663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.346762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.346785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.346955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.346978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.347090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.347111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.347360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.347392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.347631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.347669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.347831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.347855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 
00:28:02.888 [2024-10-17 19:35:26.347969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.347991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.348104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.348128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.348290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.348323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.348440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.348473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.888 [2024-10-17 19:35:26.348655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.888 [2024-10-17 19:35:26.348689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.888 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.348892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.348915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.349080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.349102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.349300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.349322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.349548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.349581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.349712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.349744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 
00:28:02.889 [2024-10-17 19:35:26.349924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.349957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.350074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.350106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.350216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.350249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.350440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.350479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.350672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.350706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.350833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.350866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.350992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.351024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.351146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.351179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.351373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.351404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.351578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.351607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 
00:28:02.889 [2024-10-17 19:35:26.351798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.351830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.351961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.351994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.352146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.352180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.352394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.352419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.352513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.352535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.352722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.352746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.352839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.352877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.353118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.353191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.353330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.353368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.353573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.353624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 
00:28:02.889 [2024-10-17 19:35:26.353900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.353933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.354065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.354099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.354295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.354329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.354567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.354612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.354802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.354834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.355089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.355121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.355249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.355282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.355422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.355454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.355564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.889 [2024-10-17 19:35:26.355596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.889 qpair failed and we were unable to recover it. 00:28:02.889 [2024-10-17 19:35:26.355730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.355755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 
00:28:02.890 [2024-10-17 19:35:26.355863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.355891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.356047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.356070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.356243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.356267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.356422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.356445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.356553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.356575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.356749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.356772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.356868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.356891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.357079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.357102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.357207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.357230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.357453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.357475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 
00:28:02.890 [2024-10-17 19:35:26.357642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.357665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.357751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.357773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.357860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.357882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.358031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.358055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.358349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.358373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.358626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.358651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.358822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.358845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.358944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.358967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.359074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.359097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.359262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.359285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 
00:28:02.890 [2024-10-17 19:35:26.359532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.359566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.359837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.359870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.359995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.360028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.360228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.360260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.360505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.360528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.360769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.360792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.361015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.361039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.361147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.361171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.361336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.361370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.361575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.361615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 
00:28:02.890 [2024-10-17 19:35:26.361793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.361826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.361945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.361968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.362087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.362109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.890 [2024-10-17 19:35:26.362287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.890 [2024-10-17 19:35:26.362311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.890 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.362413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.362436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.362615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.362638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.362863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.362886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.362985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.363008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.363205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.363239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.363356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.363391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 
00:28:02.891 [2024-10-17 19:35:26.363681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.363720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.363854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.363883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.364085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.364117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.364259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.364292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.364480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.364514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.364694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.364720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.364886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.364908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.365130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.365154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.365245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.365269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.365377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.365400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 
00:28:02.891 [2024-10-17 19:35:26.365565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.365588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.365761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.365794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.365905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.365937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.366118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.366153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.366350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.366385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.366666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.366702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.366824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.366857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.366973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.367006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.367154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.367188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.367397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.367439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 
00:28:02.891 [2024-10-17 19:35:26.367622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.367645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.367794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.367819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.368007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.368040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.368223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.368257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.368389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.368423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.368621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.368656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.368788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.368820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.368993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.369027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.369144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.369182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 00:28:02.891 [2024-10-17 19:35:26.369387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.891 [2024-10-17 19:35:26.369421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.891 qpair failed and we were unable to recover it. 
00:28:02.892 [2024-10-17 19:35:26.369532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.369565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.369821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.369857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.370051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.370084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.370265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.370299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.370492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.370525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.370768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.370793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.370945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.370967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.371138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.371171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.371289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.371322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.371455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.371488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 
00:28:02.892 [2024-10-17 19:35:26.371730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.371765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.372036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.372068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.372193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.372227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.372407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.372440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.372579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.372622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.372813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.372836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.372934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.372956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.373088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.373111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.373279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.373301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.373543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.373573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 
00:28:02.892 [2024-10-17 19:35:26.373751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.373783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.373890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.373922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.374101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.374135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.374408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.374440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.374675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.374700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.374860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.374893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.375037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.375072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.375196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.375228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.375355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.375388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.375582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.375625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 
00:28:02.892 [2024-10-17 19:35:26.375866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.375900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.376132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.376165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.376359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.376392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.892 [2024-10-17 19:35:26.376675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.892 [2024-10-17 19:35:26.376699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.892 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.376870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.376894] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.377064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.377097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.377273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.377307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.377488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.377512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.377631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.377655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.377901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.377930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 
00:28:02.893 [2024-10-17 19:35:26.378084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.378108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.378284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.378316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.378438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.378472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.378580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.378625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.378762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.378795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.378981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.379015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.379203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.379236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.379526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.379559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.379691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.379726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.379900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.379934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 
00:28:02.893 [2024-10-17 19:35:26.380054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.380087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.380198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.380232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.380369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.380402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.380648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.380684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.380931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.380964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.381092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.381126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.381431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.381464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.381619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.381653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.381897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.381931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.893 [2024-10-17 19:35:26.382159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.382192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 
00:28:02.893 [2024-10-17 19:35:26.382376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.893 [2024-10-17 19:35:26.382409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.893 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.382626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.382661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.382892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.382924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.383047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.383081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.383206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.383238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.383364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.383397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.383573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.383619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.383941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.384016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.384242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.384279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.384477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.384511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 
00:28:02.894 [2024-10-17 19:35:26.384719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.384757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.384952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.384986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.385167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.385202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.385469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.385503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.385679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.385715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.385913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.385946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.386129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.386165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.386276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.386309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.386498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.386532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.386717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.386741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 
00:28:02.894 [2024-10-17 19:35:26.386901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.386924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.387090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.387113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.387281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.387305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.387465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.387488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.387657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.387682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.387850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.387873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.388024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.388049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.388134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.388155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.388318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.388342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.388508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.388531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 
00:28:02.894 [2024-10-17 19:35:26.388626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.388649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.388807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.388831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.388997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.389031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.389147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.389179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.389304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.389339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.389523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.389557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.389815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.389850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.389975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.894 [2024-10-17 19:35:26.390008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.894 qpair failed and we were unable to recover it. 00:28:02.894 [2024-10-17 19:35:26.390227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.390260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.390435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.390469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 
00:28:02.895 [2024-10-17 19:35:26.390670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.390693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.390922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.390945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.391112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.391135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.391249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.391271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.391426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.391450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.391691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.391717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.391882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.391913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.392183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.392253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.392413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.392449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.392596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.392643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 
00:28:02.895 [2024-10-17 19:35:26.392755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.392789] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.392981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.393015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.393194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.393228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.393408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.393434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.393542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.393574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.393703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.393737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.393929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.393962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.394075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.394110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.394237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.394269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.394536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.394569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 
00:28:02.895 [2024-10-17 19:35:26.394678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.394700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.394858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.394881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.394971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.395014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.395150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.395183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.395373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.395406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.395533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.395565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.395746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.395770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.395930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.395952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.396199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.396232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.396473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.396507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 
00:28:02.895 [2024-10-17 19:35:26.396758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.396782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.396876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.396897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.397001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.397025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.397200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.895 [2024-10-17 19:35:26.397224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.895 qpair failed and we were unable to recover it. 00:28:02.895 [2024-10-17 19:35:26.397403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.397429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.397533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.397556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.397745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.397780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.397893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.397926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.398120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.398153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.398260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.398294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 
00:28:02.896 [2024-10-17 19:35:26.398489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.398522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.398703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.398737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.398955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.398979] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.399168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.399192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.399349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.399371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.399592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.399636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.399770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.399803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.399997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.400030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.400146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.400182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.400372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.400405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 
00:28:02.896 [2024-10-17 19:35:26.400596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.400651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.400772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.400795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.400883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.400904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.401087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.401204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.401328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.401502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.401703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.401858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.401988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.402010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 
00:28:02.896 [2024-10-17 19:35:26.402184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.402217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.402406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.402444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.402684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.402720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.402913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.402937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.403121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.403153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.403355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.403389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.403519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.403553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.403747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.403770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.403942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.403964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 00:28:02.896 [2024-10-17 19:35:26.404140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.896 [2024-10-17 19:35:26.404174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.896 qpair failed and we were unable to recover it. 
00:28:02.899 [2024-10-17 19:35:26.418178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.899 [2024-10-17 19:35:26.418250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.899 qpair failed and we were unable to recover it.
[... the same triple repeated, alternating between tqpair=0xb48ca0 and tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420, from 19:35:26.418 through 19:35:26.443 ...]
00:28:02.902 [2024-10-17 19:35:26.443812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.902 [2024-10-17 19:35:26.443886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.902 qpair failed and we were unable to recover it.
00:28:02.902 [2024-10-17 19:35:26.444064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.902 [2024-10-17 19:35:26.444133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:02.902 qpair failed and we were unable to recover it.
00:28:02.907 [2024-10-17 19:35:26.478633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.907 [2024-10-17 19:35:26.478667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.907 qpair failed and we were unable to recover it.
00:28:02.907 [2024-10-17 19:35:26.478776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.478807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.478948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.478973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.479072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.479095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.479340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.479363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.479469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.479491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.479661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.479685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.479782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.479804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.479971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.479996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.480087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.480110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.480262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.480286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 
00:28:02.907 [2024-10-17 19:35:26.480387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.480410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.480638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.480672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.480908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.480941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.481051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.481084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.481267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.481299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.481547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.481585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.907 qpair failed and we were unable to recover it. 00:28:02.907 [2024-10-17 19:35:26.481878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.907 [2024-10-17 19:35:26.481910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.482171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.482207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.482329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.482361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.482539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.482572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 
00:28:02.908 [2024-10-17 19:35:26.482696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.482739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.482852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.482876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.483049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.483081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.483253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.483286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.483467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.483500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.483684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.483719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.483825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.483869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.484039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.484063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.484314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.484347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.484544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.484578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 
00:28:02.908 [2024-10-17 19:35:26.484833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.484866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.484996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.485029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.485166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.485200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.485445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.485469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.485632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.485656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.485873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.485896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.485999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.486022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.486122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.486145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.486306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.486331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.486520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.486541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 
00:28:02.908 [2024-10-17 19:35:26.486712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.486738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.486833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.486856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.908 [2024-10-17 19:35:26.487019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.908 [2024-10-17 19:35:26.487043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.908 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.487204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.487236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.487433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.487465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.487652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.487696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.487784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.487806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.488025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.488059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.488199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.488232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.488354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.488387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 
00:28:02.909 [2024-10-17 19:35:26.488581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.488622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.488736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.488777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.488874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.488898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.488998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.489020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.489176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.489200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.489289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.489329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.489533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.489572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.489837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.489872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.489987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.490020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.490205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.490245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 
00:28:02.909 [2024-10-17 19:35:26.490422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.490445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.490615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.490639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.490753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.490785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.490900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.490934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.491222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.491253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.491380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.491414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.491585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.491636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.491767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.491800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.492030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.492054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.492222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.492245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 
00:28:02.909 [2024-10-17 19:35:26.492409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.492431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.492541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.492563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.492742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.492765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.492859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.492884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.492988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.493011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.493181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.493204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.493360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.493394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.493637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.493671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.493856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.493889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 00:28:02.909 [2024-10-17 19:35:26.494098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.909 [2024-10-17 19:35:26.494130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.909 qpair failed and we were unable to recover it. 
00:28:02.909 [2024-10-17 19:35:26.494244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.494276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.494406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.494439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.494575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.494629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.494816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.494856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.495046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.495080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.495248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.495272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.495363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.495384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.495622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.495647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.495807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.495830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.495986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.496010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 
00:28:02.910 [2024-10-17 19:35:26.496173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.496205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.496327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.496361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.496498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.496531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.496716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.496750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.496961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.496993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.497157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.497182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.497390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.497412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.497565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.497589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.497840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.497875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.498004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.498037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 
00:28:02.910 [2024-10-17 19:35:26.498223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.498256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.498389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.498423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.498616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.498653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.498836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.498859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.499027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.499050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.499151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.499195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.499317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.499349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.499529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.499562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.499811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.499844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.500033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.500066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 
00:28:02.910 [2024-10-17 19:35:26.500330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.500364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.500486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.500519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.500646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.500682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.500797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.500831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.501033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.501067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.501170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.501193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.910 [2024-10-17 19:35:26.501420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.910 [2024-10-17 19:35:26.501454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.910 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.501642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.501679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.501812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.501844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.502044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.502066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 
00:28:02.911 [2024-10-17 19:35:26.502161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.502183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.502333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.502357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.502524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.502547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.502717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.502742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.502840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.502869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.503045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.503069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.503171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.503194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.503416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.503449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.503571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.503611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 00:28:02.911 [2024-10-17 19:35:26.503749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.911 [2024-10-17 19:35:26.503784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.911 qpair failed and we were unable to recover it. 
00:28:02.911 [2024-10-17 19:35:26.503986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.911 [2024-10-17 19:35:26.504019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.911 qpair failed and we were unable to recover it.
00:28:02.911 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats back-to-back from 19:35:26.504 through 19:35:26.546, mostly against tqpair=0xb48ca0 and intermittently against tqpair=0x7f8500000b90 and tqpair=0x7f84fc000b90, always with addr=10.0.0.2, port=4420 ...]
00:28:02.917 [2024-10-17 19:35:26.546828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.917 [2024-10-17 19:35:26.546862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:02.917 qpair failed and we were unable to recover it.
00:28:02.917 [2024-10-17 19:35:26.547097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.547121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.547341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.547372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.547599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.547641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.547793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.547816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.547916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.547938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.548136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.548160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.548310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.548344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.548617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.548651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.548848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.548883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.549122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.549155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 
00:28:02.917 [2024-10-17 19:35:26.549291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.549324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.549529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.549562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.549714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.549748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.549949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.549982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.550200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.550233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.550523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.550560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.550775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.550810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.551051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.551093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.551335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.551368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.551586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.551630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 
00:28:02.917 [2024-10-17 19:35:26.551824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.551858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.552097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.552131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.552399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.552431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.552701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.552737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.552866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.552899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.917 [2024-10-17 19:35:26.553170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.917 [2024-10-17 19:35:26.553202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.917 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.553445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.553478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.553597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.553652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.553788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.553826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.554069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.554101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 
00:28:02.918 [2024-10-17 19:35:26.554226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.554259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.554389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.554422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.554685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.554720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.554860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.554893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.555134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.555165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.555341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.555375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.555663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.555719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.555964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.555996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.556239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.556273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.556517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.556551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 
00:28:02.918 [2024-10-17 19:35:26.556844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.556877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.557007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.557041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.557310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.557347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.557487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.557521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.557699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.557733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.558001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.558035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.558229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.558263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.558380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.558414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.558594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.558629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.558790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.558813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 
00:28:02.918 [2024-10-17 19:35:26.558994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.559027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.559294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.559328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.559539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.559571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.559825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.559858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.560062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.918 [2024-10-17 19:35:26.560094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.918 qpair failed and we were unable to recover it. 00:28:02.918 [2024-10-17 19:35:26.560266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.560289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.560420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.560443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.560707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.560749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.560953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.560987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.561194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.561226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 
00:28:02.919 [2024-10-17 19:35:26.561359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.561383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.561482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.561505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.561748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.561773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.562069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.562103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.562324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.562357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.562630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.562665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.562841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.562875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.563059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.563093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.563225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.563249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.563504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.563541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 
00:28:02.919 [2024-10-17 19:35:26.563725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.563760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.563964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.563996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.564228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.564261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.564504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.564539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.564838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.564872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.565144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.565168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.565455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.565479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.565564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.565586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.565704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.565727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.565894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.565917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 
00:28:02.919 [2024-10-17 19:35:26.566138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.566162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.566381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.566406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.566638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.566674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.566871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.566904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.567168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.567202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.567492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.567526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.567796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.567832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.568029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.568052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.568231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.568263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.568454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.568489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 
00:28:02.919 [2024-10-17 19:35:26.568627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.568661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.568835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.568867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.568989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.569022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.569283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.569316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.569559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.919 [2024-10-17 19:35:26.569583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.919 qpair failed and we were unable to recover it. 00:28:02.919 [2024-10-17 19:35:26.569686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.569709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.569952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.569991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.570178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.570211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.570499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.570531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.570748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.570783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 
00:28:02.920 [2024-10-17 19:35:26.570929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.570963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.571143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.571177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.571442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.571467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.571637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.571672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.571811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.571845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.572111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.572144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.572386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.572418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.572710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.572745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.572925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.572957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.573241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.573265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 
00:28:02.920 [2024-10-17 19:35:26.573473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.573497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.573666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.573690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.573938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.573970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.574219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.574243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.574400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.574423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.574662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.574685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.574800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.574823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.574995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.575018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.575273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.575298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.575489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.575513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 
00:28:02.920 [2024-10-17 19:35:26.575697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.575722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.575896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.575919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.576078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.576102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.576270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.576296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.576523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.576548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.576810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.576836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.577081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.577103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.577272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.577295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.577389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.577412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 00:28:02.920 [2024-10-17 19:35:26.577678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:02.920 [2024-10-17 19:35:26.577702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:02.920 qpair failed and we were unable to recover it. 
00:28:02.920 [2024-10-17 19:35:26.577921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:02.920 [2024-10-17 19:35:26.577953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:02.920 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every further reconnect attempt, with only the timestamps advancing ...]
00:28:03.207 [2024-10-17 19:35:26.633539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.207 [2024-10-17 19:35:26.633573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.207 qpair failed and we were unable to recover it.
00:28:03.207 [2024-10-17 19:35:26.633851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.633887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.634084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.634122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.634401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.634435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.634634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.634669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.634930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.634965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.635157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.635192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.635327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.635359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.635587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.635636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.635897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.635930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.636137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.636170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 
00:28:03.207 [2024-10-17 19:35:26.636435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.636470] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.636724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.636761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.637008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.637042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.637228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.637262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.637475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.637508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.637778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.637815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.637939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.637973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.207 [2024-10-17 19:35:26.638198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.207 [2024-10-17 19:35:26.638232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.207 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.638434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.638468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.638764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.638798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 
00:28:03.208 [2024-10-17 19:35:26.639013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.639048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.639186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.639221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.639418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.639450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.639646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.639681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.639879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.639913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.640118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.640153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.640330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.640364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.640542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.640576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.640839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.640882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.641008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.641042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 
00:28:03.208 [2024-10-17 19:35:26.641292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.641325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.641574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.641619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.641916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.641950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.642156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.642189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.642412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.642446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.642694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.642730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.642980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.643016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.643220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.643254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.643469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.643502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.643715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.643751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 
00:28:03.208 [2024-10-17 19:35:26.644002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.644036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.644258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.644292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.644571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.644617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.644886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.644919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.645135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.645169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.645359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.645392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.645666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.645701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.645960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.645993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.646192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.646226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.646499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.646532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 
00:28:03.208 [2024-10-17 19:35:26.646745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.646780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.647026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.647059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.647258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.647292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.647572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.647647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.647857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.647890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.648144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.648179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.648370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.648403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.208 qpair failed and we were unable to recover it. 00:28:03.208 [2024-10-17 19:35:26.648655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.208 [2024-10-17 19:35:26.648692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.648908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.648941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.649215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.649248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 
00:28:03.209 [2024-10-17 19:35:26.649391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.649425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.649677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.649712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.649976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.650009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.650177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.650211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.650461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.650494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.650765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.650801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.651001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.651035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.651169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.651203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.651401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.651434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.651687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.651735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 
00:28:03.209 [2024-10-17 19:35:26.651933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.651968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.652227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.652261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.652439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.652474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.652750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.652786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.652945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.652978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.653242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.653275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.653527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.653561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.653776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.653812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.654042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.654076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.654309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.654344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 
00:28:03.209 [2024-10-17 19:35:26.654532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.654567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.654725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.654759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.655062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.655098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.655296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.655330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.655567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.655614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.655816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.655850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.656042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.656076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.656291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.656326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.656584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.656633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.656912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.656946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 
00:28:03.209 [2024-10-17 19:35:26.657135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.657170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.657350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.657385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.657612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.657650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.657904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.657937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.658163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.658197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.658471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.658505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.658730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.658771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.659049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.209 [2024-10-17 19:35:26.659083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.209 qpair failed and we were unable to recover it. 00:28:03.209 [2024-10-17 19:35:26.659360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.659394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.659517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.659551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 
00:28:03.210 [2024-10-17 19:35:26.659875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.659912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.660192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.660225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.660443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.660476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.660730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.660767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.660958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.660994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.661263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.661297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.661493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.661526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.661788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.661824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.661953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.661987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.662238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.662274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 
00:28:03.210 [2024-10-17 19:35:26.662557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.662594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.662740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.662774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.663024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.663058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.663365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.663399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.663592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.663640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.663910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.663944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.664226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.664262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.664506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.664540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.664853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.664890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.665160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.665195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 
00:28:03.210 [2024-10-17 19:35:26.665483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.665518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.665793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.665829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.666037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.666070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.666322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.666368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.666659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.666696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.666985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.667018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.667222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.667256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.667559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.667595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.667803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.667857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 00:28:03.210 [2024-10-17 19:35:26.667979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.210 [2024-10-17 19:35:26.668014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.210 qpair failed and we were unable to recover it. 
00:28:03.210 [2024-10-17 19:35:26.668199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.210 [2024-10-17 19:35:26.668234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.210 qpair failed and we were unable to recover it.
[... the same three-record error block -- connect() failed (errno = 111), sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." -- repeats 208 more times with only the microsecond timestamps advancing, from 19:35:26.668497 through 19:35:26.725549 ...]
00:28:03.216 [2024-10-17 19:35:26.725788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.216 [2024-10-17 19:35:26.725823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.216 qpair failed and we were unable to recover it.
00:28:03.216 [2024-10-17 19:35:26.726090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.726125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.726415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.726454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.726642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.726677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.726886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.726920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.727105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.727138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.727432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.727466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.727738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.727774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.727967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.728002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.728187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.728222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.728499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.728533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 
00:28:03.216 [2024-10-17 19:35:26.728798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.728834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.729051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.729085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.729364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.729398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.729701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.729737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.729997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.730030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.730239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.730275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.730537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.730572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.730889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.730924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.731181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.731216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.731495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.731530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 
00:28:03.216 [2024-10-17 19:35:26.731775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.731811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.732120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.732155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.216 qpair failed and we were unable to recover it. 00:28:03.216 [2024-10-17 19:35:26.732418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.216 [2024-10-17 19:35:26.732451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.732724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.732760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.732966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.733000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.733251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.733285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.733561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.733595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.733902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.733937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.734193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.734226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.734441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.734475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 
00:28:03.217 [2024-10-17 19:35:26.734755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.734792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.734995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.735030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.735225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.735259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.735544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.735578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.735850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.735884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.736174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.736208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.736412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.736446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.736675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.736710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.737011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.737044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.737305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.737341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 
00:28:03.217 [2024-10-17 19:35:26.737645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.737680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.737922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.737956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.738279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.738320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.738612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.738646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.738938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.738972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.739169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.739203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.739405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.739440] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.739718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.739755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.740011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.740046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.740333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.740366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 
00:28:03.217 [2024-10-17 19:35:26.740583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.740629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.740908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.740942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.741132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.741166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.741447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.741481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.741740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.741776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.741996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.742030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.742354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.742390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.742622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.742657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.742844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.742878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.743145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.743179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 
00:28:03.217 [2024-10-17 19:35:26.743432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.743465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.743661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.743696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.743919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.217 [2024-10-17 19:35:26.743953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.217 qpair failed and we were unable to recover it. 00:28:03.217 [2024-10-17 19:35:26.744233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.744268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.744499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.744533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.744812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.744847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.745038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.745072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.745331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.745365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.745667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.745702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.745964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.746002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 
00:28:03.218 [2024-10-17 19:35:26.746182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.746217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.746413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.746448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.746729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.746766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.747023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.747057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.747344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.747377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.747697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.747733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.748004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.748038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.748249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.748283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.748548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.748581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.748874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.748910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 
00:28:03.218 [2024-10-17 19:35:26.749203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.749237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.749506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.749541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.749834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.749870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.750001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.750036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.750314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.750348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.750619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.750656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.750867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.750901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.751175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.751209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.751494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.751529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.751727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.751762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 
00:28:03.218 [2024-10-17 19:35:26.752064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.752097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.752295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.752330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.752548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.752583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.752846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.752880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.753085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.753119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.753337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.753371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.753651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.753688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.753969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.754003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.754259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.754293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.754499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.754533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 
00:28:03.218 [2024-10-17 19:35:26.754721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.754756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.755012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.755046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.755244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.218 [2024-10-17 19:35:26.755280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.218 qpair failed and we were unable to recover it. 00:28:03.218 [2024-10-17 19:35:26.755468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.755502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.755786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.755823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.756080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.756114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.756420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.756454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.756738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.756773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.757054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.757089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.757365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.757398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 
00:28:03.219 [2024-10-17 19:35:26.757707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.757748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.758027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.758061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.758292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.758327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.758532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.758565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.758800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.758835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.759031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.759065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.759316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.759349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.759625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.759661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.759921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.759956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.760232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.760267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 
00:28:03.219 [2024-10-17 19:35:26.760547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.760581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.760868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.760902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.761196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.761231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.761502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.761535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.761775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.761811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.762009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.762043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.762345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.762379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.762638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.762674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.762901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.762935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 00:28:03.219 [2024-10-17 19:35:26.763156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.219 [2024-10-17 19:35:26.763191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.219 qpair failed and we were unable to recover it. 
00:28:03.219 [2024-10-17 19:35:26.763479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.219 [2024-10-17 19:35:26.763513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.219 qpair failed and we were unable to recover it.
00:28:03.225 [same three-line failure repeated for every reconnect attempt from 19:35:26.763479 through 19:35:26.816773 — roughly 200 identical connect() failures (errno = 111) against tqpair=0xb48ca0 at 10.0.0.2, port 4420, none of which recovered]
00:28:03.225 [2024-10-17 19:35:26.816958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.816991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.817177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.817212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.817412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.817449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.817592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.817640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.817855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.817889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.818118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.818154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.818289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.818325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.818643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.818681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.818894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.818929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.819143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.819178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 
00:28:03.225 [2024-10-17 19:35:26.819383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.819418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.819616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.819654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.819803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.819837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.820028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.820062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.820434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.820469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.820622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.820664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.820811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.820845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.821078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.821113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.821427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.821462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.821652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.821688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 
00:28:03.225 [2024-10-17 19:35:26.821876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.821910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.822060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.822095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.822293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.822328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.822595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.822647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.822790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.822824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.822961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.822996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.823259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.823293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.824961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.825023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.825273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.825309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.825619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.825656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 
00:28:03.225 [2024-10-17 19:35:26.825934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.825969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.826173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.826207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.826414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.826448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.826753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.826790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.827048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.827082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.827299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.827333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.827446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.225 [2024-10-17 19:35:26.827481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.225 qpair failed and we were unable to recover it. 00:28:03.225 [2024-10-17 19:35:26.827776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.827812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.828014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.828048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.828255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.828288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 
00:28:03.226 [2024-10-17 19:35:26.828446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.828480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.828626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.828662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.828857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.828897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.829030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.829065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.829193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.829228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.829438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.829472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.829612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.829649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.829909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.829943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.830096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.830131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.830352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.830386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 
00:28:03.226 [2024-10-17 19:35:26.830592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.830642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.830930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.830966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.831191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.831225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.831357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.831392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.831533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.831567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.831731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.831765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.831962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.831996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.832202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.832237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.832373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.832407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.832617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.832652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 
00:28:03.226 [2024-10-17 19:35:26.832915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.832950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.833233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.833266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.833460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.833493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.833627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.833663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.833892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.833930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.834072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.834107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.834305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.834338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.834465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.834499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.834645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.834681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.834942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.834977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 
00:28:03.226 [2024-10-17 19:35:26.835307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.835341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.835487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.835522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.835670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.835706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.835974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.836007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.836140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.836176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.836364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.836399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.836539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.836580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.226 qpair failed and we were unable to recover it. 00:28:03.226 [2024-10-17 19:35:26.836651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb56be0 (9): Bad file descriptor 00:28:03.226 [2024-10-17 19:35:26.837090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.226 [2024-10-17 19:35:26.837169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.227 qpair failed and we were unable to recover it. 00:28:03.227 [2024-10-17 19:35:26.837325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.227 [2024-10-17 19:35:26.837363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.227 qpair failed and we were unable to recover it. 
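Two distinct errno values appear in the excerpt above: errno = 111 from posix_sock_create() on every connect() attempt, and errno 9 ("Bad file descriptor") from nvme_tcp_qpair_process_completions(). On Linux these are ECONNREFUSED (the target at 10.0.0.2:4420 was reachable but nothing was accepting connections on that port) and EBADF (most plausibly the socket behind tqpair 0xb56be0 had already been closed when the flush ran; that reading is an interpretation, not something the log states). A tiny standalone C check of both mappings, illustrative only and not SPDK code:

/* errno_names.c - confirms the two errno values seen in the log above.
 * Build and run: cc errno_names.c -o errno_names && ./errno_names */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* errno = 111, reported by posix_sock_create() on every connect(). */
    printf("errno 111: %s (ECONNREFUSED = %d)\n", strerror(111), ECONNREFUSED);
    /* errno = 9, reported once by nvme_tcp_qpair_process_completions(). */
    printf("errno   9: %s (EBADF = %d)\n", strerror(9), EBADF);
    return 0;
}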
00:28:03.227 [2024-10-17 19:35:26.837557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.227 [2024-10-17 19:35:26.837592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.227 qpair failed and we were unable to recover it.
[... the same three-message sequence was logged for roughly 92 connection attempts in total against tqpair=0x7f8500000b90 between 19:35:26.837 and 19:35:26.859; only the timestamps differ ...]
00:28:03.229 [2024-10-17 19:35:26.859339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.859373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.859581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.859623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.859760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.859794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.859930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.859963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.860169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.860203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.860343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.860378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.860525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.860558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.860772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.860807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.861021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.861055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.861253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.861287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 
00:28:03.229 [2024-10-17 19:35:26.861491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.861526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.861668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.861722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.861904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.861938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.862084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.862116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.862301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.862335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.862543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.862576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.862865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.862900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.863024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.863057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.863361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.863395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.863551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.863584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 
00:28:03.229 [2024-10-17 19:35:26.863849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.863889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.864009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.864042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.864317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.864351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.864532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.864565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.229 [2024-10-17 19:35:26.864826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.229 [2024-10-17 19:35:26.864860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.229 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.865060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.865094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.865300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.865333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.865454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.865487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.865674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.865709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.865913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.865947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 
00:28:03.230 [2024-10-17 19:35:26.866198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.866232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.866419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.866452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.866583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.866645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.866830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.866863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.867012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.867045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.867240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.867274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.867459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.867492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.867695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.867746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.867950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.867989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.868118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.868152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 
00:28:03.230 [2024-10-17 19:35:26.868291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.868323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.868541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.868574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.868714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.868747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.868932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.868966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.869269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.869303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.869501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.869535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.869745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.869779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.869980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.870015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.870229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.870262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.870444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.870477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 
00:28:03.230 [2024-10-17 19:35:26.870690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.870727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.870923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.870956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.871225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.871261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.871484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.871518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.871751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.871787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.872009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.872042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.872227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.872262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.872540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.872574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.872716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.872751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.872877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.872908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 
00:28:03.230 [2024-10-17 19:35:26.873118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.873158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.873430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.873466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.873729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.873765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.874043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.874076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.874308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.230 [2024-10-17 19:35:26.874343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.230 qpair failed and we were unable to recover it. 00:28:03.230 [2024-10-17 19:35:26.874535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.874570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.874721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.874757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.874965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.874999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.875125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.875160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.875398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.875432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 
00:28:03.231 [2024-10-17 19:35:26.875558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.875592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.875751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.875787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.875927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.875959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.876162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.876195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.876395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.876430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.876688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.876723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.876858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.876891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.877038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.877073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.877355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.877389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.877650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.877686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 
00:28:03.231 [2024-10-17 19:35:26.877894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.877929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.878068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.878103] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.878386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.878420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.878547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.878582] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.878753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.878786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.878939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.878971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.879266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.879301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.879563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.879607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.879747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.879781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.879932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.879967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 
00:28:03.231 [2024-10-17 19:35:26.880244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.880277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.880528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.880567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.880791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.880825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.881078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.881112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.881402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.881441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.881574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.881619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.881776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.881811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.882010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.882044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.882325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.882360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.882554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.882589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 
00:28:03.231 [2024-10-17 19:35:26.882765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.882805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.882961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.882996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.883251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.883286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.883486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.883520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.883662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.231 [2024-10-17 19:35:26.883696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.231 qpair failed and we were unable to recover it. 00:28:03.231 [2024-10-17 19:35:26.883903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.883937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.884127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.884164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.884419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.884453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.884591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.884635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.884826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.884861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 
00:28:03.232 [2024-10-17 19:35:26.885061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.885094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.885448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.885482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.885708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.885744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.885938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.885972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.886169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.886204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.886404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.886438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.886726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.886764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.886968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.887003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.887282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.887317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.887508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.887543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 
00:28:03.232 [2024-10-17 19:35:26.887760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.887794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.888047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.888083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.888300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.888336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.888535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.888569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.888722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.888756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.888899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.888933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.889140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.889175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.889446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.889528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.889787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.889841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.890053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.890088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 
00:28:03.232 [2024-10-17 19:35:26.890363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.890397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.890587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.890638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.890837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.890872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.891060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.891097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.891320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.891354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.891583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.891633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.891776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.891811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.891991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.892027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.892326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.892361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.892626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.892665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 
00:28:03.232 [2024-10-17 19:35:26.892920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.892954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.893128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.893162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.893347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.893384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.893583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.893635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.893827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.893862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.232 qpair failed and we were unable to recover it. 00:28:03.232 [2024-10-17 19:35:26.894012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.232 [2024-10-17 19:35:26.894048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.894267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.894302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.894448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.894483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.894707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.894743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.894997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.895032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 
00:28:03.233 [2024-10-17 19:35:26.895172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.895209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.895438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.895473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.895624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.895660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.895953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.895987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.896206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.896248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.896403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.896438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.896702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.896737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.896872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.896906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.897055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.897090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.897226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.897260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 
00:28:03.233 [2024-10-17 19:35:26.897539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.897573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.897729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.897765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.897961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.897998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.898204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.898237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.898516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.898550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.898764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.898800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.899060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.899095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.899304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.899342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.899484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.899520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.899729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.899767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 
00:28:03.233 [2024-10-17 19:35:26.899986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.900021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.900253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.900288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.900548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.900583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.900797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.900834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.900956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.900990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.901178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.901213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.901431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.901468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.901653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.901689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.901897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.901933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.902137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.902172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 
00:28:03.233 [2024-10-17 19:35:26.902366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.902400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.902561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.233 [2024-10-17 19:35:26.902599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.233 qpair failed and we were unable to recover it. 00:28:03.233 [2024-10-17 19:35:26.902824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.902860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.903000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.903035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.903274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.903309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.903545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.903580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.903901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.903937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.904203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.904236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.904508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.904544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.904838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.904874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 
00:28:03.234 [2024-10-17 19:35:26.905088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.905125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.905347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.905382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.905568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.905618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.905808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.905845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.906056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.906092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.906403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.906439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.906671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.906710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.906992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.907029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.907306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.907343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.907595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.907641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 
00:28:03.234 [2024-10-17 19:35:26.907930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.907965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.908247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.908282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.908470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.908505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.908789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.908825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.909038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.909074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.909303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.909337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.909471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.909505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.909699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.909734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.909991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.910026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.910253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.910288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 
00:28:03.234 [2024-10-17 19:35:26.910566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.910616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.910823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.910858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.911087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.911121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.911323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.911358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.911647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.911685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.911828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.911861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.912058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.912095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.912231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.912266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.912568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.912619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.912823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.912858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 
00:28:03.234 [2024-10-17 19:35:26.913118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.913153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.913293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.913330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.234 [2024-10-17 19:35:26.913567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.234 [2024-10-17 19:35:26.913623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.234 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.913892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.913928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.914143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.914177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.914317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.914354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.914621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.914658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.914802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.914836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.915039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.915073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.915353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.915388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 
00:28:03.235 [2024-10-17 19:35:26.915628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.915666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.915806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.915840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.916046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.916083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.916291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.916327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.916532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.916568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.916778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.916815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.917019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.917053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.917254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.917290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.917548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.917585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.917835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.917870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 
00:28:03.235 [2024-10-17 19:35:26.918126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.918163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.918311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.918346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.918563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.918597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.918807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.918841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.919041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.919077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.919304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.919340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.919538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.919574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.919809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.919846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.920105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.920141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.920406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.920442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 
00:28:03.235 [2024-10-17 19:35:26.920707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.920743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.921029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.921065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.921282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.921319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.921613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.921651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.921915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.921951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.922252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.922287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.922444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.922481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.922754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.922791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.922915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.922949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.923152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.923186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 
00:28:03.235 [2024-10-17 19:35:26.923499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.923535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.923830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.923865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.924075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.924110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.235 [2024-10-17 19:35:26.924305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.235 [2024-10-17 19:35:26.924346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.235 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.924539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.924575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.924893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.924928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.925121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.925156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.925361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.925398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.925585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.925645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.925903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.925939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 
00:28:03.236 [2024-10-17 19:35:26.926247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.926281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.926492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.926527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.926772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.926809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.927111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.927146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.927378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.927414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.927615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.927651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.927851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.927886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.928205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.928242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.928537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.928572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.928845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.928879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 
00:28:03.236 [2024-10-17 19:35:26.929146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.929180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.929463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.929497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.929703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.929739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.929973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.930009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.930325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.930360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.930659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.930697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.930918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.930952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.931155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.931189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.931401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.931434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.931714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.931750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 
00:28:03.236 [2024-10-17 19:35:26.931953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.931993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.932213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.932247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.932434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.932467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.932582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.932627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.932885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.932920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.933115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.933149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.933344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.933379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.933591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.933638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.933897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.933933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 00:28:03.236 [2024-10-17 19:35:26.934121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.236 [2024-10-17 19:35:26.934156] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.236 qpair failed and we were unable to recover it. 
00:28:03.236 [2024-10-17 19:35:26.934414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.236 [2024-10-17 19:35:26.934449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.236 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for tqpair=0xb48ca0 from 19:35:26.934663 through 19:35:26.953703, roughly 70 further occurrences condensed ...]
00:28:03.238 [2024-10-17 19:35:26.954104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.238 [2024-10-17 19:35:26.954182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:03.238 qpair failed and we were unable to recover it.
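For triage purposes: errno = 111 is ECONNREFUSED on Linux, i.e. the host at 10.0.0.2 is reachable but nothing is accepting connections on port 4420 (the IANA-assigned NVMe/TCP port), so every socket posix_sock_create() opens fails at connect(). A minimal standalone sketch of the same failure mode, using plain POSIX sockets rather than SPDK code; the address and port are the ones from the log, everything else is illustrative:

/* Minimal reproduction of the failure mode in this log: a TCP connect()
 * to an address where no listener is bound fails with ECONNREFUSED (111).
 * Build: cc -o connrefused connrefused.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                   /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    /* On a host where 10.0.0.2 is routable but the port is closed,
     * this prints: connect() failed, errno = 111 (Connection refused) */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}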
[... the same connect()/qpair-failure sequence repeats for tqpair=0x7f84fc000b90 from 19:35:26.954430 through 19:35:26.974426, roughly 75 further occurrences condensed ...]
[... two more occurrences for tqpair=0x7f84fc000b90 (19:35:26.974674, 19:35:26.974940); a single occurrence against a new handle, tqpair=0x7f8508000b90 (19:35:26.975221); the sequence then resumes for tqpair=0xb48ca0 from 19:35:26.975659, 7 occurrences condensed ...]
[... the same connect()/qpair-failure sequence continues for tqpair=0xb48ca0 from 19:35:26.977658 through 19:35:26.990647, roughly 50 further occurrences condensed ...]
00:28:03.519 [2024-10-17 19:35:26.990973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.991009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.991273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.991309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.991567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.991611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.991872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.991907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.992098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.992133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.992328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.992364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.992485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.992518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.992763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.992800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.993005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.993039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.993224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.993261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 
00:28:03.519 [2024-10-17 19:35:26.993548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.993583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.993806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.993843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.994029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.994064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.994282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.994318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.994534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.994568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.994866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.994902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.995107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.995140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.995400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.995435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.995567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.995622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.995783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.995821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 
00:28:03.519 [2024-10-17 19:35:26.996042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.996076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.996357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.996390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.996572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.996619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.996876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.996909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.997135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.997172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.997444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.997478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.997684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.997721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.997924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.997960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.998146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.998180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.998391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.998426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 
00:28:03.519 [2024-10-17 19:35:26.998658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.998695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.998880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.998914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.999116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.999152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.999374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.999409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.999555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.999587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:26.999815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:26.999850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:27.000059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:27.000093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:27.000317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:27.000358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:27.000597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:27.000656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 00:28:03.519 [2024-10-17 19:35:27.000978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.519 [2024-10-17 19:35:27.001012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.519 qpair failed and we were unable to recover it. 
00:28:03.520 [2024-10-17 19:35:27.001207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.001243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.001428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.001464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.001658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.001695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.001813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.001848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.002060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.002096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.002315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.002349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.002628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.002667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.002794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.002830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.002967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.003002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.003186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.003223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 
00:28:03.520 [2024-10-17 19:35:27.003501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.003535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.003816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.003854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.004074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.004109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.004338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.004372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.004677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.004713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.004999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.005035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.005246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.005280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.005486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.005521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.005731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.005767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.005982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.006017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 
00:28:03.520 [2024-10-17 19:35:27.006166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.006203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.006467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.006504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.006644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.006680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.006888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.006924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.007126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.007162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.007403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.007437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.007649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.007683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.007823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.007859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.008070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.008105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.008248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.008284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 
00:28:03.520 [2024-10-17 19:35:27.008500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.008537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.008789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.008825] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.008983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.009019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.009226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.009263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.009557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.009594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.009762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.009800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.009939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.009974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.010175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.010210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.010418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.010454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.520 [2024-10-17 19:35:27.010660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.010696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 
00:28:03.520 [2024-10-17 19:35:27.010817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.520 [2024-10-17 19:35:27.010852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.520 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.011133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.011169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.011322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.011357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.011507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.011541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.011674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.011709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.011842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.011876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.012091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.012125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.012271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.012304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.012467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.012500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.012651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.012686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 
00:28:03.521 [2024-10-17 19:35:27.012831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.012868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.013073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.013107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.013302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.013338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.013556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.013590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.013726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.013761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.013908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.013942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.014132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.014166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.014352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.014387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.014516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.014550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.014699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.014737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 
00:28:03.521 [2024-10-17 19:35:27.014866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.014900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.015027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.015070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.015218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.015265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.015489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.015540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.015734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.015787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.015947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.016005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.016177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.016218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.016364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.016407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.016643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.016692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.016846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.016885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 
00:28:03.521 [2024-10-17 19:35:27.017015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.017052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.017307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.017344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.017462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.017496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.017630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.017666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.521 qpair failed and we were unable to recover it. 00:28:03.521 [2024-10-17 19:35:27.017800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.521 [2024-10-17 19:35:27.017837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.017966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.018003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.018128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.018163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.018293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.018327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.018469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.018504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.018635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.018673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 
00:28:03.522 [2024-10-17 19:35:27.018797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.018832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.019022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.019058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.019188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.019225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.019341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.019376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.019512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.019546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.019709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.019747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.019869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.019905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.020025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.020062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.020185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.020220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 00:28:03.522 [2024-10-17 19:35:27.020332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.020366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it. 
00:28:03.522 [2024-10-17 19:35:27.020565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.522 [2024-10-17 19:35:27.020611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.522 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1055:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent connection attempt from 19:35:27.020800 through 19:35:27.074558 ...]
00:28:03.528 [2024-10-17 19:35:27.074843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.074880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.075181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.075215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.075450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.075487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.075673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.075731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.076003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.076037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.076252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.076287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.076540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.076574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.076801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.076836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.076984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.077020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.077206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.077242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 
00:28:03.528 [2024-10-17 19:35:27.077518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.077559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.077810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.077846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.078134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.078171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.078439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.078476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.078673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.078712] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.078858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.078892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.079098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.079135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.079252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.079286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.079565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.079611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.079744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.079782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 
00:28:03.528 [2024-10-17 19:35:27.079978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.080013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.080333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.080369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.080597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.080644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.080849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.080884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.081077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.081113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.081306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.081340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.081549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.081586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.081826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.081861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.082119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.082154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.082281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.082318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 
00:28:03.528 [2024-10-17 19:35:27.082506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.082542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.528 [2024-10-17 19:35:27.082757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.528 [2024-10-17 19:35:27.082793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.528 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.082945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.082980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.083262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.083296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.083494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.083529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.083719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.083757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.084032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.084066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.084328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.084365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.084487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.084522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.084712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.084748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 
00:28:03.529 [2024-10-17 19:35:27.084884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.084920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.085199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.085234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.085508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.085543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.085774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.085810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.086083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.086119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.086322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.086358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.086580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.086628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.086828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.086865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.086997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.087034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.087291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.087328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 
00:28:03.529 [2024-10-17 19:35:27.087633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.087670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.087872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.087907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.088033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.088068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.088323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.088358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.088570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.088612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.088870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.088904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.089090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.089124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.089391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.089425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.089644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.089680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.089868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.089903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 
00:28:03.529 [2024-10-17 19:35:27.090171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.090205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.090501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.090535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.090680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.090716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.090907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.090942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.091264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.091298] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.091502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.091536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.091675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.091711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.092006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.092041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.092332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.092367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.092572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.092619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 
00:28:03.529 [2024-10-17 19:35:27.092900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.092934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.093205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.529 [2024-10-17 19:35:27.093240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.529 qpair failed and we were unable to recover it. 00:28:03.529 [2024-10-17 19:35:27.093531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.093566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.093864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.093900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.094164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.094198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.094495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.094529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.094723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.094759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.095016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.095050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.095307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.095348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.095535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.095570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 
00:28:03.530 [2024-10-17 19:35:27.095854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.095888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.096090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.096124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.096411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.096445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.096631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.096666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.096928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.096962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.097243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.097278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.097562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.097596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.097891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.097926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.098195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.098230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.098509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.098543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 
00:28:03.530 [2024-10-17 19:35:27.098833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.098868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.099147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.099181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.099466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.099501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.099632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.099668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.099874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.099909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.100106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.100141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.100392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.100425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.100639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.100675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.100865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.100899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.101031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.101065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 
00:28:03.530 [2024-10-17 19:35:27.101345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.101379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.101638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.101674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.101975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.102009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.102210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.102244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.102452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.102486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.102764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.102800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.103080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.530 [2024-10-17 19:35:27.103115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.530 qpair failed and we were unable to recover it. 00:28:03.530 [2024-10-17 19:35:27.103369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.103404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.103631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.103666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.103888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.103923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 
00:28:03.531 [2024-10-17 19:35:27.104048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.104082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.104273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.104307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.104564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.104599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.104904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.104939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.105139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.105174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.105373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.105407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.105691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.105727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.105913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.105948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.106138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.106172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.106353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.106399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 
00:28:03.531 [2024-10-17 19:35:27.106680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.106716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.106920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.106954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.107160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.107196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.107474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.107508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.107764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.107800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.107997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.108032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.108225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.108260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.108444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.108478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.108762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.108798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 00:28:03.531 [2024-10-17 19:35:27.109061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.109095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it. 
00:28:03.531 [2024-10-17 19:35:27.109295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.531 [2024-10-17 19:35:27.109330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.531 qpair failed and we were unable to recover it.
[... the same three-line failure repeats back-to-back (roughly 200 occurrences, timestamps 19:35:27.109 through 19:35:27.166): every connect() to 10.0.0.2 port 4420 returns errno = 111 (ECONNREFUSED) and the qpair at 0xb48ca0 cannot be recovered ...]
00:28:03.537 [2024-10-17 19:35:27.166153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.166190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it.
00:28:03.537 [2024-10-17 19:35:27.166384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.166418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.166626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.166664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.166851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.166889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.167104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.167141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.167258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.167292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.167567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.167619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.167846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.167882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.168085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.168120] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.168377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.168412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.168562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.168598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 
00:28:03.537 [2024-10-17 19:35:27.168915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.168951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.169259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.169294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.169417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.169452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.169644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.169682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.169888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.169922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.170183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.170220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.170401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.170435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.170554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.170591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.170852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.170888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.171144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.171178] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 
00:28:03.537 [2024-10-17 19:35:27.171435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.171472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.171664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.171701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.171909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.171946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.172131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.172166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.172385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.172419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.172613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.172649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.172834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.172868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.173146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.173180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.173365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.173400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.173661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.173696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 
00:28:03.537 [2024-10-17 19:35:27.173887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.173921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.537 [2024-10-17 19:35:27.174103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.537 [2024-10-17 19:35:27.174138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.537 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.174325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.174359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.174584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.174628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.174817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.174852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.175035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.175070] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.175353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.175388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.175585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.175643] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.175921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.175956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.176243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.176277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 
00:28:03.538 [2024-10-17 19:35:27.176533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.176567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.176874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.176910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.177198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.177232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.177366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.177400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.177625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.177661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.177960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.177995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.178250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.178285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.178580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.178622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.178884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.178919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.179106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.179141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 
00:28:03.538 [2024-10-17 19:35:27.179347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.179382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.179660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.179702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.179935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.179971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.180104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.180138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.180345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.180380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.180636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.180672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.180957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.180992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.181269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.181304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.181585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.181631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.181921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.181956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 
00:28:03.538 [2024-10-17 19:35:27.182216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.182250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.182391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.182425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.182680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.182716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.183020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.183054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.183274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.183309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.183574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.183619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.183879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.183914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.184202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.184236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.184529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.184563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.184795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.184831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 
00:28:03.538 [2024-10-17 19:35:27.185085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.185119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.185423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.538 [2024-10-17 19:35:27.185458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.538 qpair failed and we were unable to recover it. 00:28:03.538 [2024-10-17 19:35:27.185719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.185755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.186026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.186061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.186358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.186392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.186533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.186567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.186870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.186907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.187110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.187145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.187399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.187434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.187635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.187671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 
00:28:03.539 [2024-10-17 19:35:27.187854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.187890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.188139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.188173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.188476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.188510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.188786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.188822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.189104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.189138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.189325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.189359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.189624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.189659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.189917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.189952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.190230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.190264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.190525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.190561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 
00:28:03.539 [2024-10-17 19:35:27.190868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.190903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.191157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.191192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.191402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.191437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.191652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.191688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.191889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.191925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.192113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.192147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.192344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.192379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.192633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.192670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.192859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.192893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.193149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.193184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 
00:28:03.539 [2024-10-17 19:35:27.193385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.193420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.193614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.193649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.193953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.193988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.194250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.194284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.194532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.194567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.194830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.194867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.195095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.539 [2024-10-17 19:35:27.195129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.539 qpair failed and we were unable to recover it. 00:28:03.539 [2024-10-17 19:35:27.195313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.195348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.195618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.195654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.195929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.195964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 
00:28:03.540 [2024-10-17 19:35:27.196243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.196277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.196523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.196558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.196894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.196929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.197117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.197152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.197366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.197401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.197660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.197696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.197994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.198028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.198292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.198326] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.198636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.198671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.198932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.198974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 
00:28:03.540 [2024-10-17 19:35:27.199257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.199291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.199514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.199549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.199755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.199790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.199974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.200008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.200153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.200187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.200386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.200421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.200674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.200710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.200853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.200888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.201165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.201199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 00:28:03.540 [2024-10-17 19:35:27.201503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.540 [2024-10-17 19:35:27.201538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.540 qpair failed and we were unable to recover it. 
00:28:03.540 [2024-10-17 19:35:27.201824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.201859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.202132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.202167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.202367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.202402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.202614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.202650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.202836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.202872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.203148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.203181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.203450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.203486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.203780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.203816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.203935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.203969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.204245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.204280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.204535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.204570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.204790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.204826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.205041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.540 [2024-10-17 19:35:27.205075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.540 qpair failed and we were unable to recover it.
00:28:03.540 [2024-10-17 19:35:27.205331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.205365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.205495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.205529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.205808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.205844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.206061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.206095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.206425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.206459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.206611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.206647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.206769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.206803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.207089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.207124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.207396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.207430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.207632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.207668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.207924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.207959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.208082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.208115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.208370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.208405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.208685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.208721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.208988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.209023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.209223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.209257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.209523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.209558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.209819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.209861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.210142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.210177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.210359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.210394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.210662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.210699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.210885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.210919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.211110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.211144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.211422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.211457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.211737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.211773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.212053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.212088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.212286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.212320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.212617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.212654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.212841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.212876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.213185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.213219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.213496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.213531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.213840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.213877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.214064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.214098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.214386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.214421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.214709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.214744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.215018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.215053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.215330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.215364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.215683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.215719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.215998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.216032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.216158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.541 [2024-10-17 19:35:27.216194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.541 qpair failed and we were unable to recover it.
00:28:03.541 [2024-10-17 19:35:27.216322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.216357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.216544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.216578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.216882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.216918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.217196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.217231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.217514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.217559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.217822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.217857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.218071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.218104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.218379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.218413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.218637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.218673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.218928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.218963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.219252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.219287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.219428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.219462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.219719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.219756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.219959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.219994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.220263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.220296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.220580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.220623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.220811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.220846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.220992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.221027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.221228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.221263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.221460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.221494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.221698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.221733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.222010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.222045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.222350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.222385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.222585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.222628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.222904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.222938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.223124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.223159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.223340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.223376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.223656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.223692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.223822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.223856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.224111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.224145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.224447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.224482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.224769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.224804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.225115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.225150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.225427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.225461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.225750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.225786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.225921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.225955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.226161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.226195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.226454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.226489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.226689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.226726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.226909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.542 [2024-10-17 19:35:27.226944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.542 qpair failed and we were unable to recover it.
00:28:03.542 [2024-10-17 19:35:27.227148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.227182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.227419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.227453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.227663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.227700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.227982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.228016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.228296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.228332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.228622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.228664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.228929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.228963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.229160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.229195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.229448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.229482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.229704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.229741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.229926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.229961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.230240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.230276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.230497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.230532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.230783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.230819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.231005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.231039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.231233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.231267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.231414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.231449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.231655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.231691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.231951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.231986] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.232199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.232233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.232420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.232455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.232762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.232798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.233007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.233042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.233307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.233341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.233651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.233687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.233933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.233969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.234269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.234304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.234570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.234612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.234811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.234846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.235127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.235161] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.235306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.235341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.235570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.235613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.235896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.235936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.236200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.236235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.236528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.236563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.236835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.236871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.237161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.237195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.237461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.237496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.237795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.237831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.238111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.238147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.543 qpair failed and we were unable to recover it.
00:28:03.543 [2024-10-17 19:35:27.238361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.543 [2024-10-17 19:35:27.238396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.238705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.238740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.239020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.239055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.239362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.239397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.239609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.239644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.239786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.239821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.240099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.240179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.240440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.240478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.240624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.240659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.240864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.240900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.241157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.241191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.241395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.241429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.241703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.241739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.241938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.241972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.242156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.242189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.242387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.242421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.242639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.242682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.242880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.242916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.243141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.243175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.243455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.243500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.243708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.243744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.244030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.244063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.244303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.244339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.244618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.244653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.244862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.244897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.245090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.245123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.245319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.245353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.245658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.245695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.245882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.245916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.246062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.246097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.246376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.246410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.246623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.246659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.246916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.246950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.247217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.247252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.544 [2024-10-17 19:35:27.247534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.544 [2024-10-17 19:35:27.247568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.544 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.247858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.247896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.248170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.248204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.248487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.248521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.248804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.248841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.249147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.249181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.249444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.249478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.249684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.249719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.249927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.249961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.250168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.250203] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.250402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.250436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.250622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.250657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.250922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.250956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.251235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.251269] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.251472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.251506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.251761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.251798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.252096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.252132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.252399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.252433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.252657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.252692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.252975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.253010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.253211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.253245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.253393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.253428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.253723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.253759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.254014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.254049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.254194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.254229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.254488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.254530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.254791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.254826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.255113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.255147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.255340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.255375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.255585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.255638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.255837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.255871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.256075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.256109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.256315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.256349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.256548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.256583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.256817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.256852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.257105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.257139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.257395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.257429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.257712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.257746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.258053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.258086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.258403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.258439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.258720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.258755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.259034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.259068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.545 [2024-10-17 19:35:27.259299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.545 [2024-10-17 19:35:27.259333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.545 qpair failed and we were unable to recover it.
00:28:03.546 [2024-10-17 19:35:27.259560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.259594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.259818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.259855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.260164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.260197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.260428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.260462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.260744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.260780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.261055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.261090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.261376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.261409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.261690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.261726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.261925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.261959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.262083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.262117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 
00:28:03.546 [2024-10-17 19:35:27.262382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.262417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.262637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.262672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.262870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.262904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.263207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.263241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.263526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.263564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.263794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.263831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.264061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.264094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.264279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.264312] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.264581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.264632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.264832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.264866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 
00:28:03.546 [2024-10-17 19:35:27.265171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.265205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.265485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.265518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.265804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.265848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.266114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.266147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.266430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.266465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.266619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.266656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.266934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.266968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.267174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.267208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.267410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.267445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.267725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.267763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 
00:28:03.546 [2024-10-17 19:35:27.268062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.268097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.268378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.268413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.268637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.268673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.268934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.268969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.269110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.269144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.269398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.269432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.269733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.546 [2024-10-17 19:35:27.269770] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.546 qpair failed and we were unable to recover it. 00:28:03.546 [2024-10-17 19:35:27.269975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.270010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.270242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.270276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.270460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.270495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 
00:28:03.547 [2024-10-17 19:35:27.270761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.270797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.271007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.271041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.271315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.271349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.271618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.271666] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.271933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.271966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.272178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.272212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.272489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.272523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.272730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.272766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.273043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.273076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.273479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.273561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 
00:28:03.547 [2024-10-17 19:35:27.273874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.273914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.274105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.274140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.274419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.274453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.274760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.274797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.275006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.275040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.275319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.275353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.275655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.275692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.275951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.275985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.276281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.276315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.276582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.276625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 
00:28:03.547 [2024-10-17 19:35:27.276842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.276876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.277133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.277167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.277397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.277431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.277749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.277785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.278005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.278039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.278227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.278261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.278537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.278572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.278840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.278875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.279097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.279130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.279331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.279367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 
00:28:03.547 [2024-10-17 19:35:27.279508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.279543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.279755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.279790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.280017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.280050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.280267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.280301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.280531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.280564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.280779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.280814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.281118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.547 [2024-10-17 19:35:27.281160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.547 qpair failed and we were unable to recover it. 00:28:03.547 [2024-10-17 19:35:27.281354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.281388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.281628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.281664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.281851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.281886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 
00:28:03.548 [2024-10-17 19:35:27.282152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.282186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.282467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.282502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.282785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.282823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.283119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.283154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.283382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.283416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.283694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.283752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.284041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.284076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.548 [2024-10-17 19:35:27.284349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.548 [2024-10-17 19:35:27.284383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.548 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.284594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.284639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.284863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.284900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 
00:28:03.826 [2024-10-17 19:35:27.285184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.285219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.285417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.285450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.285642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.285678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.285959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.285993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.286179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.286213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.286422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.286457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.286726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.286762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.287020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.287055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.287308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.287343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.287654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.287689] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 
00:28:03.826 [2024-10-17 19:35:27.287914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.287948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.288220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.288254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.288537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.288571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.288863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.288900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.289112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.289146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.289331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.289365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.289579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.289624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.289900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.289934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.290210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.290244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.290439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.290474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 
00:28:03.826 [2024-10-17 19:35:27.290737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.290772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.291063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.291097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.291342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.291377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.291564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.291598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.291897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.291932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.292199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.292232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.292526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.292560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.292842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.292885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.293074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.293107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.293381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.293415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 
00:28:03.826 [2024-10-17 19:35:27.293673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.293709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.293965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.294000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.294299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.294333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.294532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.294566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.294763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.294799] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.295078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.295112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.295381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.295415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.826 [2024-10-17 19:35:27.295624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.826 [2024-10-17 19:35:27.295662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.826 qpair failed and we were unable to recover it. 00:28:03.827 [2024-10-17 19:35:27.295929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.827 [2024-10-17 19:35:27.295964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.827 qpair failed and we were unable to recover it. 00:28:03.827 [2024-10-17 19:35:27.296181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.827 [2024-10-17 19:35:27.296215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.827 qpair failed and we were unable to recover it. 
00:28:03.827 [2024-10-17 19:35:27.296466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.827 [2024-10-17 19:35:27.296501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.827 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 19:35:27.296 through 19:35:27.355; every attempt fails identically ...]
00:28:03.832 [2024-10-17 19:35:27.355473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.832 [2024-10-17 19:35:27.355506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.832 qpair failed and we were unable to recover it.
00:28:03.832 [2024-10-17 19:35:27.355705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.355741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.355875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.355909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.356110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.356144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.356356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.356389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.356694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.356730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.356929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.356963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.357243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.357277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.357554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.357588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.357758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.357793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.357987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.358022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 
00:28:03.832 [2024-10-17 19:35:27.358277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.358311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.358571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.358617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.358918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.358953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.359139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.359172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.359370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.359403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.359632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.359668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.359957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.359992] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.360224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.360258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.360443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.360477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.360752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.360788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 
00:28:03.832 [2024-10-17 19:35:27.361059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.361092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.361345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.361378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.361641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.361677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.361865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.361898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.362177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.362210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.362337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.832 [2024-10-17 19:35:27.362370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.832 qpair failed and we were unable to recover it. 00:28:03.832 [2024-10-17 19:35:27.362625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.362660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.362867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.362901] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.363104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.363138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.363321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.363354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 
00:28:03.833 [2024-10-17 19:35:27.363550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.363585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.363875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.363908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.364202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.364236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.364419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.364454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.364711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.364748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.364947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.364981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.365251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.365286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.365558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.365591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.365912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.365947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.366223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.366256] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 
00:28:03.833 [2024-10-17 19:35:27.366538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.366573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.366800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.366834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.367022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.367059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.367318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.367353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.367552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.367587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2256716 Killed "${NVMF_APP[@]}" "$@"
00:28:03.833 [2024-10-17 19:35:27.367739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.367774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.367977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.368011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.368215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.368250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
[2024-10-17 19:35:27.368552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.368621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
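Note: the two shell lines interleaved above explain the connect storm. target_disconnect.sh has SIGKILLed the running target process (the "line 36: 2256716 Killed" message from the shell) and then calls disconnect_init 10.0.0.2 to bring a fresh target up on the same address, so every connect() fails with ECONNREFUSED until the new process listens. A hedged sketch of that kill-and-restart step; only disconnect_init, its argument, and the pid come from the trace, the rest is hypothetical:

    # Hypothetical reconstruction of the disconnect step exercised here.
    old_pid=2256716                      # pid reported in the "Killed" message
    kill -9 "$old_pid" 2>/dev/null       # SIGKILL; the shell then prints "Killed"
    wait "$old_pid" 2>/dev/null || true  # reap it; connects now fail with errno 111
    disconnect_init 10.0.0.2             # restart the target, per the xtrace above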
00:28:03.833 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
[2024-10-17 19:35:27.368877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.368911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
[2024-10-17 19:35:27.369196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.369230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
[2024-10-17 19:35:27.369467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.369502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-10-17 19:35:27.369785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.369821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.370044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.370078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.370262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.370297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.370499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.370534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.370750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.833 [2024-10-17 19:35:27.370787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.833 qpair failed and we were unable to recover it.
00:28:03.833 [2024-10-17 19:35:27.371065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.371099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.371401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.371435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.371688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.371724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.372011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.372051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.372309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.372343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.372478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.372513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.372799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.372835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.833 [2024-10-17 19:35:27.372971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.833 [2024-10-17 19:35:27.373005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.833 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.373196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.373231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.373549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.373580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 
00:28:03.834 [2024-10-17 19:35:27.373776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.373809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.374064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.374099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.374362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.374398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.374581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.374628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.374917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.374952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.375161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.375198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.375456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.375492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.375708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.375744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.376004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.376038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.376345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.376382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 
00:28:03.834 [2024-10-17 19:35:27.376656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=2257433
00:28:03.834 [2024-10-17 19:35:27.376694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 2257433
00:28:03.834 [2024-10-17 19:35:27.377009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.377044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[2024-10-17 19:35:27.377313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.377347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2257433 ']'
[2024-10-17 19:35:27.377639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.377680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-10-17 19:35:27.377946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.377982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
[2024-10-17 19:35:27.378254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.378288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
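Note: the xtrace lines fused into the errors above show the restart in progress: nvmfappstart records the new target pid (nvmfpid=2257433), launches nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 inside the cvl_0_0_ns_spdk network namespace, and waitforlisten then blocks until the app answers on its RPC socket. A minimal polling sketch of that wait, reusing the rpc_addr and max_retries values from the trace; the real helper in common/autotest_common.sh does more (for example it talks to the RPC server), so this loop is only an approximation:

    # Poll for the SPDK RPC UNIX socket, roughly what waitforlisten does.
    rpc_addr=/var/tmp/spdk.sock    # from the xtrace above
    max_retries=100                # from the xtrace above
    for ((i = 0; i < max_retries; i++)); do
        [ -S "$rpc_addr" ] && break           # socket appears once nvmf_tgt is up
        kill -0 2257433 2>/dev/null || break  # stop early if the target died
        sleep 0.5
    done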
00:28:03.834 [2024-10-17 19:35:27.378537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-17 19:35:27.378579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 qpair failed and we were unable to recover it.
00:28:03.834 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:03.834 [2024-10-17 19:35:27.378883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-17 19:35:27.378919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.379205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.379242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.379515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.379549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.379790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.379826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.380027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.380061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.380297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.380332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.380645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.380684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.380917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.834 [2024-10-17 19:35:27.380952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.834 qpair failed and we were unable to recover it.
00:28:03.834 [2024-10-17 19:35:27.381190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.381226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.381351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.381385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.381592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.381646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.381850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.381883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.382071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.834 [2024-10-17 19:35:27.382116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.834 qpair failed and we were unable to recover it. 00:28:03.834 [2024-10-17 19:35:27.382338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.382375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.382681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.382718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.382964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.382998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.383249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.383286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.383540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.383575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 
00:28:03.835 [2024-10-17 19:35:27.383788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.383823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.384103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.384137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.384341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.384376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.384636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.384675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.384880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.384915] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.385178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.385214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.385351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.385386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.385667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.385703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.385918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.385953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.386168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.386205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 
00:28:03.835 [2024-10-17 19:35:27.386509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.386543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.386769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.386806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.386999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.387036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.387293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.387327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.387633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.387672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.387951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.387999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.388147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.388182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.388902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.388943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.389250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.389285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.389569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.389615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 
00:28:03.835 [2024-10-17 19:35:27.389900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.389937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.390147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.390189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.390444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.390481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.390741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.390779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.390996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.391032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.391272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.391306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.391569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.391616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.391830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.391867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.392062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.392097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.392310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.392344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 
00:28:03.835 [2024-10-17 19:35:27.392623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.392661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.392961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.392996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.393241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.393276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.393476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.393511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.393698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.393736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.394042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.835 [2024-10-17 19:35:27.394080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.835 qpair failed and we were unable to recover it. 00:28:03.835 [2024-10-17 19:35:27.394348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.394384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.394583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.394628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.394837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.394871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.395104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.395139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 
00:28:03.836 [2024-10-17 19:35:27.395331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.395367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.395563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.395599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.395829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.395864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.396147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.396182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.396457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.396492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.396722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.396759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.396954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.396990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.397191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.397225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.397446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.397481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.397743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.397781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 
00:28:03.836 [2024-10-17 19:35:27.398048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.398083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.398380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.398414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.398701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.398738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.398872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.398908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.399113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.399147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.399281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.399317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.399514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.399550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.399842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.399877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.400087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.400123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.400319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.400357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 
00:28:03.836 [2024-10-17 19:35:27.400567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.400611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.400872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.400908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.401046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.401088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.401312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.401346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.401568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.401616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.401895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.401931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.402115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.402151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.402347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.402383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.402568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.402619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.402900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.402934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 
00:28:03.836 [2024-10-17 19:35:27.403124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.403158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.403417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.403451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.403677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.403716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.403915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.403950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.404161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.404198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.404406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.404442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.404754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.404790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.836 qpair failed and we were unable to recover it. 00:28:03.836 [2024-10-17 19:35:27.404982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.836 [2024-10-17 19:35:27.405018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.405215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.405249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.405444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.405478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 
00:28:03.837 [2024-10-17 19:35:27.405757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.405792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.405923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.405958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.406109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.406145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.406409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.406444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.406568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.406630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.406911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.406945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.407229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.407264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.407491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.407527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.407657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.407694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.407903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.407944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 
00:28:03.837 [2024-10-17 19:35:27.408141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.408175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.408479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.408514] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.408798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.408833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.409116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.409150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.409305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.409341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.409541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.409577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.409847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.409882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.410084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.410118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.410415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.410450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.410649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.410687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 
00:28:03.837 [2024-10-17 19:35:27.410877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.410913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.411168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.411205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.411413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.411449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.411734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.411815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.412174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.412250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.412566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.412618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.412841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.412877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.413000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.413056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.413334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.413370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.413559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.413592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 
00:28:03.837 [2024-10-17 19:35:27.413809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.413844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.414102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.414136] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.414321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.414355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.414549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.414584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.414874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.414909] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.415114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.415149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.415426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.837 [2024-10-17 19:35:27.415471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.837 qpair failed and we were unable to recover it. 00:28:03.837 [2024-10-17 19:35:27.415676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.415714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.415925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.415961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.416152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.416187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 
00:28:03.838 [2024-10-17 19:35:27.416450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.416486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.416747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.416783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.417066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.417102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.417400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.417436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.417702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.417737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.417928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.417963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.418193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.418227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.418500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.418533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.418742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.418780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.418973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.419008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 
00:28:03.838 [2024-10-17 19:35:27.419293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.419329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.419590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.419639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.419849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.419883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.420088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.420122] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.420233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.420267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.420541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.420575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.420744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.420779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.421010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.421045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.421308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.421344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.421543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.421579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 
00:28:03.838 [2024-10-17 19:35:27.421719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.421755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.421866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.421899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.422085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.422119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.422398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.422443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.422660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.422695] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.422889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.422923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.423230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.423264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.423403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.423438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.423637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.423673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.423906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.423940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 
00:28:03.838 [2024-10-17 19:35:27.424143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.424177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.424367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.424402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.424553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.838 [2024-10-17 19:35:27.424587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.838 qpair failed and we were unable to recover it. 00:28:03.838 [2024-10-17 19:35:27.424806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.424841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.424967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.425004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.425140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.425173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.425315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.425359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.425559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.425593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.425795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.425830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.426112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.426147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 
00:28:03.839 [2024-10-17 19:35:27.426268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.426304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.426446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.426482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.426621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.426656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.426842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.426874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.427029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.427065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.427322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.427357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.427483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.427519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.427793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.427831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.427972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.428008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.428259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.428294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 
00:28:03.839 [2024-10-17 19:35:27.428513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.428547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.428821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.428856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.429063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.429098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.429091] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:28:03.839 [2024-10-17 19:35:27.429139] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.839 [2024-10-17 19:35:27.429241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.429277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.429399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.429432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.429622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.429654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.429850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.429882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.430155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.430187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.430447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.430481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 
00:28:03.839 [2024-10-17 19:35:27.430716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.430752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.430940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.430978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.431232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.431266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.431539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.431572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.431885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.431923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.432128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.432165] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.432297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.432333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.432620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.432657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.432935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.432970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.433121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.433158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 
00:28:03.839 [2024-10-17 19:35:27.433449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.433485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.433675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.433710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.433896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.433931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.839 qpair failed and we were unable to recover it. 00:28:03.839 [2024-10-17 19:35:27.434210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.839 [2024-10-17 19:35:27.434245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.434442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.434476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.434749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.434785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.435000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.435042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.435237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.435271] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.435442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.435477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.435663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.435699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 
00:28:03.840 [2024-10-17 19:35:27.435906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.435940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.436262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.436297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.436561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.436594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.436891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.436927] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.437185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.437220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.437417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.437452] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.437681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.437718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.437831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.437864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.438057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.438092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.438308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.438342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 
00:28:03.840 [2024-10-17 19:35:27.438528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.438563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.438782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.438815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.439019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.439051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.439303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.439337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.439631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.439667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.439870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.439906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.440188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.440221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.440503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.440543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.440812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.440847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.441098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.441132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 
00:28:03.840 [2024-10-17 19:35:27.441387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.441422] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.441676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.441710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.441926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.441960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.442238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.442273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.442536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.442568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.442792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.442853] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.443075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.443109] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.443395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.443430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.443556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.443589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.443837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.443873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 
00:28:03.840 [2024-10-17 19:35:27.444158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.444193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.444443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.840 [2024-10-17 19:35:27.444476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.840 qpair failed and we were unable to recover it. 00:28:03.840 [2024-10-17 19:35:27.444617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.444654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.444855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.444889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.445025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.445061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.445310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.445344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.445624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.445665] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.445867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.445902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.446089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.446124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.446373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.446407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 
00:28:03.841 [2024-10-17 19:35:27.446655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.446690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.446909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.446945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.447219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.447254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.447507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.447542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.447758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.447794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.447976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.448010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.448284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.448319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.448620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.448655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.448913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.448947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.449161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.449195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 
00:28:03.841 [2024-10-17 19:35:27.449454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.449487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.449716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.449751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.449997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.450030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.450229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.450262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.450517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.450552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.450875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.450921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.451245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.451278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.451529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.451564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.451711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.451746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.451951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.451983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 
00:28:03.841 [2024-10-17 19:35:27.452185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.452217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.452526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.452560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.452864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.452900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.841 qpair failed and we were unable to recover it. 00:28:03.841 [2024-10-17 19:35:27.453178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.841 [2024-10-17 19:35:27.453211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.453489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.453523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.453663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.453697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.453904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.453939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.454201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.454234] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.454529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.454564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.454860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.454897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 
00:28:03.842 [2024-10-17 19:35:27.455021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.455057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.455354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.455387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.455664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.455699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.455951] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.455984] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.456239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.456273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.456455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.456490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.456742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.456783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.456918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.456952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.457088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.457121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.457317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.457349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 
00:28:03.842 [2024-10-17 19:35:27.457537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.457570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.457811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.457857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.458149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.458191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.458404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.458439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.458639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.458675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.458959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.458994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.459288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.459322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.459522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.459556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.459836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.459872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.460168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.460202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 
00:28:03.842 [2024-10-17 19:35:27.460403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.460438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.460644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.460681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.460819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.460854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.461050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.461084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.461372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.461405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.461588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.461632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.461832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.461866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.462077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.462111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.842 [2024-10-17 19:35:27.462382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.842 [2024-10-17 19:35:27.462417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.842 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.462621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.462655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 
00:28:03.843 [2024-10-17 19:35:27.462991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.463025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.463245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.463280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.463550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.463585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.463788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.463822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.463973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.464008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.464280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.464314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.464562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.464596] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.464878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.464913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.465016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.465052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.465198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.465231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 
00:28:03.843 [2024-10-17 19:35:27.465476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.465510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.465640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.465676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.465920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.465955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.466087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.466119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.466312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.466345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.466523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.466558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.466862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.466904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.467050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.467084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.467277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.467310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.467493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.467526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 
00:28:03.843 [2024-10-17 19:35:27.467753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.467788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.467971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.468006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.468182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.468215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.468406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.468439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.468685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.468722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.468968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.469002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.469131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.469164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.469293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.469327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.843 [2024-10-17 19:35:27.469616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.843 [2024-10-17 19:35:27.469651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.843 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.469842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.469876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 
00:28:03.844 [2024-10-17 19:35:27.470063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.470098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.470306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.470338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.470535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.470569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.470866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.470912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.471229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.471302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.471507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.471547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.471788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.471823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.472014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.472049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.472227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.472261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.472511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.472544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 
00:28:03.844 [2024-10-17 19:35:27.472831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.472867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.473140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.473192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.473393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.473426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.473652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.473692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.473897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.473931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.474138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.474171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.474314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.474347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.474620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.474655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.474899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.474932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.475220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.475253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 
00:28:03.844 [2024-10-17 19:35:27.475478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.475512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.475835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.475872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.476065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.476107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.476355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.476388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.476636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.476671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.476883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.476916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.477176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.477209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.477339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.477374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.844 [2024-10-17 19:35:27.477596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.844 [2024-10-17 19:35:27.477656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.844 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.477793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.477827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 
00:28:03.845 [2024-10-17 19:35:27.478072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.478105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.478285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.478320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.478538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.478572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.478772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.478810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.479095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.479132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.479318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.479352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.479479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.479512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.479723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.479757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.480028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.480063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.480312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.480347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 
00:28:03.845 [2024-10-17 19:35:27.480597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.480641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.480776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.480808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.481057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.481090] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.481224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.481257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.481451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.481485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.481623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.481657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.481841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.481875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.482135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.482168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.482428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.482461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 00:28:03.845 [2024-10-17 19:35:27.482575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.845 [2024-10-17 19:35:27.482616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.845 qpair failed and we were unable to recover it. 
00:28:03.845 [2024-10-17 19:35:27.482797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.482830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.483034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.483066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.483189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.483220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.483422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.483462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.483598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.483641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.483771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.483802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.484070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.484105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.484300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.484333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.484468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.484501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.484622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.484657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.484926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.484961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.485147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.485181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.485370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.485404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.485535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.485568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.485843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.845 [2024-10-17 19:35:27.485880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.845 qpair failed and we were unable to recover it.
00:28:03.845 [2024-10-17 19:35:27.486076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.486110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.486336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.486370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.486573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.486616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.486812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.486847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.487030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.487065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.487239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.487273] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.487396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.487430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.487545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.487577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.487719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.487753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.487981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.488015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.488271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.488303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.488503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.488536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.488728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.488763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.488960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.488993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.489285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.489320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.489457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.489501] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.489708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.489742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.490021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.490055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.490247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.490280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.490477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.490511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.490722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.490759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.490948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.490982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.491224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.491257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.491527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.491560] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.491787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.491823] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.492041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.492074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.492268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.492302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.492491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.492523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.492774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.492808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.493083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.493117] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.493375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.493408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.493658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.493692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.493879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.493912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.494047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.846 [2024-10-17 19:35:27.494079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.846 qpair failed and we were unable to recover it.
00:28:03.846 [2024-10-17 19:35:27.494274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.494308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.494496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.494529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.494711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.494745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.495008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.495043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.495284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.495318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.495524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.495558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.495823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.495859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.496126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.496159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.496354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.496393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.496636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.496671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.496917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.496950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.497258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.497291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.497480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.497513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.497747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.497783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.498026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.498059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.498319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.498354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.498649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.498683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.498945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.498978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.499270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.499303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.499488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.499521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.499704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.499737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.499887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.499920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.500192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.500228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.500406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.500441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.500731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.500765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.500971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.501003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.501250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.501282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.501399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.501432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.501572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.501614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.501803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.501836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.502041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.502074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.847 qpair failed and we were unable to recover it.
00:28:03.847 [2024-10-17 19:35:27.502287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.847 [2024-10-17 19:35:27.502318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.502427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.502460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.502652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.502685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.502896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.502928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.503134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.503174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.503362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.503395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.503614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.503647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.503764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.503797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.503926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.503960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.504088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.504119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.504251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.504285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.504415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.504447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.504571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.504614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.504793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.504827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.505001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.505034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.505151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.505185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.505379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.505415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.505589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.505635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.505815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.505848] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.506031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.506065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.506275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.506309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.506488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.506521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.506761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.506795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.507010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.507043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.507255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.507287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.507474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.507506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.507643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.507680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.507805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.507839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.507963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.507998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.508126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.508159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.848 [2024-10-17 19:35:27.508363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.848 [2024-10-17 19:35:27.508396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:03.848 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.508538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.508584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.508803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.508840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.509130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.509164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.509351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.509383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.509568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.509613] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.509798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.509831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.509946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.509978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.510174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.510207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.510329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.510361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.510633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.510668] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.510800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.510832] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.510950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.510983] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.511200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.511233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.511534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.511577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.511857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.511892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.512087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.512119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.512323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-10-17 19:35:27.512323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-10-17 19:35:27.512356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.512623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.512659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.512841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.512873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.513048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.513081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.513289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.513323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.513585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.513628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.513816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.513850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.514030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.514062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.514352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.514385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.514641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.514675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.514902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.514934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.515121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.515154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.515449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.515483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.515666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.515702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.515968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.849 [2024-10-17 19:35:27.516003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.849 qpair failed and we were unable to recover it.
00:28:03.849 [2024-10-17 19:35:27.516283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.516317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.516661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.516698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.516973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.517007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.517196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.517229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.517412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.517446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.517634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.517669] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.517956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.517991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.518237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.518270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.518449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.518482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.518672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.518711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.518855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.518890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.519176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.519212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.519470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.519504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.519701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.519736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.519994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.520029] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.520219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.520252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.520496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.520529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.520730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.520765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.520966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.520999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.521177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.521212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.521458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.521492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.521826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.521862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.522128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.522169] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.522322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.522356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.522492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.522528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.522749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.522787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.522999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.850 [2024-10-17 19:35:27.523034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420
00:28:03.850 qpair failed and we were unable to recover it.
00:28:03.850 [2024-10-17 19:35:27.523215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.523250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.523427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.523462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.523659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.523697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.523946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.523981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.524270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.524304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.524519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.524554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.524759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.850 [2024-10-17 19:35:27.524794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.850 qpair failed and we were unable to recover it. 00:28:03.850 [2024-10-17 19:35:27.524969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.525002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.525270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.525303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.525445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.525478] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 
00:28:03.851 [2024-10-17 19:35:27.525738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.525775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.525969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.526002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.526259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.526292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.526589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.526637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.526818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.526852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.527045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.527079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.527262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.527297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.527483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.527517] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.527706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.527740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.527918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.527950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 
00:28:03.851 [2024-10-17 19:35:27.528080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.528111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.528365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.528398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.528644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.528677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.528821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.528855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.529052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.529086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.529373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.529405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.529588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.529646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.529826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.529858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.530099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.530131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.530394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.530428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 
00:28:03.851 [2024-10-17 19:35:27.530669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.530704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.530910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.530944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.531132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.531164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.531432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.531465] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.531598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.531642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.531833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.531872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.532090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.532123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.532374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.532406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.532620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.532653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.532849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.532882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 
00:28:03.851 [2024-10-17 19:35:27.533068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.533100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.533286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.533320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.533564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.533597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.851 qpair failed and we were unable to recover it. 00:28:03.851 [2024-10-17 19:35:27.533894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.851 [2024-10-17 19:35:27.533928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.534187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.534222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.534417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.534448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.534650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.534686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.534864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.534897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.535183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.535216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.535410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.535442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 
00:28:03.852 [2024-10-17 19:35:27.535652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.535686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.535948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.535981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.536248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.536281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.536514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.536548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.536800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.536833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.537066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.537099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.537342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.537376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.537651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.537687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.537880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.537913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.538178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.538212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 
00:28:03.852 [2024-10-17 19:35:27.538457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.538491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.538689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.538725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.538994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.539028] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.539232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.539265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.539393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.539425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.539665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.539698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.539876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.539908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.540067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.540101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.540365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.540397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.540667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.540701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 
00:28:03.852 [2024-10-17 19:35:27.540968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.541000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.541195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.541228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.541496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.541529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.541774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.852 [2024-10-17 19:35:27.541809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.852 qpair failed and we were unable to recover it. 00:28:03.852 [2024-10-17 19:35:27.542005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.542037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.542326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.542365] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.542620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.542654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.542867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.542899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.543039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.543071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.543177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.543210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 
00:28:03.853 [2024-10-17 19:35:27.543321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.543353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.543547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.543580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.543840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.543873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.544062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.544096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.544393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.544425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.544685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.544719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.544986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.545019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.545258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.545291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.545501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.545535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.545740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.545775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 
00:28:03.853 [2024-10-17 19:35:27.545963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.545996] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.546174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.546207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.546413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.546446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.546641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.546675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.546872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.546905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.547083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.547116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.547398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.547430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.547672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.547707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.547880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.547913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.548194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.548227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 
00:28:03.853 [2024-10-17 19:35:27.548496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.548529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.548729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.548763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.549011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.549043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.549282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.549315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.549558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.549591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.549859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.549891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.550076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.550108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.550316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.550349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.550541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.550573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 00:28:03.853 [2024-10-17 19:35:27.550759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.550792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.853 qpair failed and we were unable to recover it. 
00:28:03.853 [2024-10-17 19:35:27.550978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.853 [2024-10-17 19:35:27.551010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.551186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.551218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.551417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.551451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.551750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.551786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.551998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.552034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.552287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.552325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.552500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.552533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.552785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.552822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.553020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.553054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 00:28:03.854 [2024-10-17 19:35:27.553265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.854 [2024-10-17 19:35:27.553299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.854 qpair failed and we were unable to recover it. 
00:28:03.854 [... four more identical connect() failed / qpair failed sequences for tqpair=0x7f8508000b90 (19:35:27.553542-19:35:27.554321) ...]
00:28:03.854 [2024-10-17 19:35:27.554429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:03.854 [2024-10-17 19:35:27.554466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:03.854 [2024-10-17 19:35:27.554473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:03.854 [2024-10-17 19:35:27.554480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:03.854 [2024-10-17 19:35:27.554486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:03.854 [... four more identical connect() failed / qpair failed sequences for tqpair=0x7f8508000b90 (19:35:27.554585-19:35:27.555308) ...]
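The app_setup_trace notices above describe how a trace snapshot of the still-running nvmf target would be captured. A minimal sketch, assuming the SPDK tools are on PATH; the shared-memory app name (nvmf), instance id (0), and trace file path (/dev/shm/nvmf_trace.0) are taken from the notices themselves, while the /tmp destination is illustrative:

    # Capture a snapshot of trace events from the running nvmf app
    # (shared-memory name "nvmf", instance id 0, as reported in the notice).
    spdk_trace -s nvmf -i 0

    # Per the notice, plain 'spdk_trace' also works if this is the only
    # SPDK application currently running.
    spdk_trace

    # Or preserve the raw trace file for offline analysis/debug;
    # /dev/shm/nvmf_trace.0 is the path reported in the notice.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0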
00:28:03.854 [... four more identical connect() failed / qpair failed sequences for tqpair=0x7f8508000b90 (19:35:27.555516-19:35:27.556208), interleaved with the reactor startup notices below ...]
00:28:03.854 [2024-10-17 19:35:27.556140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:28:03.854 [2024-10-17 19:35:27.556247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:28:03.854 [2024-10-17 19:35:27.556377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:28:03.854 [2024-10-17 19:35:27.556378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:28:03.854 [2024-10-17 19:35:27.556444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.854 [2024-10-17 19:35:27.556490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:03.854 qpair failed and we were unable to recover it.
00:28:03.854 [2024-10-17 19:35:27.556757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.854 [2024-10-17 19:35:27.556814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:03.854 qpair failed and we were unable to recover it.
00:28:03.854 [... three more identical connect() failed / qpair failed sequences for tqpair=0xb48ca0 (19:35:27.557080-19:35:27.557760) ...]
00:28:03.855 [... seven more identical connect() failed / qpair failed sequences for tqpair=0xb48ca0 (19:35:27.558007-19:35:27.559568) ...]
00:28:03.855 [... the connect() failed / qpair failed sequence then resumes for tqpair=0x7f8508000b90 (addr=10.0.0.2, port=4420) and repeats from 19:35:27.559773 through 19:35:27.570346 ...]
00:28:03.856 [2024-10-17 19:35:27.570554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.570587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.570891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.570926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.571051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.571084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.571280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.571314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.571581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.571626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.571907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.571948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.572159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.572193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.572433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.572468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.572720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.572755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.572942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.572978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 
00:28:03.856 [2024-10-17 19:35:27.573219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.573254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.573447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.573484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.573757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.573793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.574006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.574044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.574295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.574329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.574536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.574570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.574850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.574910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.575108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.575142] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.575411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.575444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.856 [2024-10-17 19:35:27.575647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.575684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 
00:28:03.856 [2024-10-17 19:35:27.575970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.856 [2024-10-17 19:35:27.576002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.856 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.576284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.576316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.576591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.576636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.576824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.576857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.577037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.577072] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.577351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.577386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.577576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.577621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.577815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.577847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.578024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.578057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.578234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.578266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 
00:28:03.857 [2024-10-17 19:35:27.578458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.578491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.578683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.578718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.578974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.579021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.579216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.579249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.579461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.579494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.579637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.579672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.579877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.579910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.580165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.580198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.580440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.580474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.580613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.580647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 
00:28:03.857 [2024-10-17 19:35:27.580824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.580856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.581068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.581102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.581302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.581335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.581529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.581561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.581775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.581810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.582065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.582099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.582381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.582415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.582617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.582650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.582844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.582877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.583013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.583046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 
00:28:03.857 [2024-10-17 19:35:27.583176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.583209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.857 [2024-10-17 19:35:27.583488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.857 [2024-10-17 19:35:27.583522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.857 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.583701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.583734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.583909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.583942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.584192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.584226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.584414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.584447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.584619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.584654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.584853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.584887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.585150] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.585184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.585361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.585401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 
00:28:03.858 [2024-10-17 19:35:27.585669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.585706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.585935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.585968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.586154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.586187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.586475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.586509] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.586699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.586733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.586912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.586946] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.587137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.587170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.587382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.587415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.587694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.587729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.587930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.587963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 
00:28:03.858 [2024-10-17 19:35:27.588216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.588250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.588427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.588461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.588637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.588671] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.588925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.588958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.589198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.589231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.589483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.589516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.589762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.589796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.589988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.590021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.590212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.590247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.590433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.590467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 
00:28:03.858 [2024-10-17 19:35:27.590738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.590774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.591051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.591085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.591334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.591368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.591513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.591547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.591695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.591729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.591964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.858 [2024-10-17 19:35:27.591998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:03.858 qpair failed and we were unable to recover it. 00:28:03.858 [2024-10-17 19:35:27.592122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.592163] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.592347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.592383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.592671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.592708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.592971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.593005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 
00:28:04.131 [2024-10-17 19:35:27.593246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.593279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.593518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.593551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.593746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.593779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.593952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.593985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.594275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.594307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.594577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.594618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.594741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.594774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.595035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.595068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.595273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.595306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.595569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.595609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 
00:28:04.131 [2024-10-17 19:35:27.595759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.595809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.595968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.596006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.596135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.596168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.596467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.596502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.596705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.596741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.596938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.596970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.597156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.597190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.597376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.597408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.597591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.597635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.597781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.597814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 
00:28:04.131 [2024-10-17 19:35:27.597961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.597994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.598282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.598315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.598509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.598541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.598738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.598781] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.599020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.599054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.599175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.599209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.599420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.599453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.599647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.599683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.599817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.599851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.600037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.600071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 
00:28:04.131 [2024-10-17 19:35:27.600346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.600379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.600641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.600676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.600811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.600845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.601114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.601149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.601414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.131 [2024-10-17 19:35:27.601446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.131 qpair failed and we were unable to recover it. 00:28:04.131 [2024-10-17 19:35:27.601671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.601706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.601844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.601877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.602147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.602180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.602304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.602338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.602531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.602566] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 
00:28:04.132 [2024-10-17 19:35:27.602844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.602877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.603058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.603091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.603285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.603318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.603522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.603553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.603777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.603811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.604000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.604032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.604228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.604261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.604448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.604481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.604683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.604719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.604930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.604963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 
00:28:04.132 [2024-10-17 19:35:27.605200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.605250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.605503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.605538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.605748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.605790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.606042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.606075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.606312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.606345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.606587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.606632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.606739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.606772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.606961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.606993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.607129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.607162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.607445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.607479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 
00:28:04.132 [2024-10-17 19:35:27.607771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.607806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.608042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.608076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.608298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.608332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.608622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.608657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.608866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.608900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.609142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.609175] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.609460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.609495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.609679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.609715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.609958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.609990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.610270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.610305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 
00:28:04.132 [2024-10-17 19:35:27.610550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.610585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.610796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.610829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.611015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.611048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.611235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.611270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.611509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.611543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.132 qpair failed and we were unable to recover it. 00:28:04.132 [2024-10-17 19:35:27.611741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.132 [2024-10-17 19:35:27.611777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.611953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.611988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.612264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.612303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.612486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.612520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.612767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.612805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 
00:28:04.133 [2024-10-17 19:35:27.612987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.613021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.613208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.613241] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.613371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.613404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.613520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.613553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.613749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.613783] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.613977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.614010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.614183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.614216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.614330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.614363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.614496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.614529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.614651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.614686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 
00:28:04.133 [2024-10-17 19:35:27.614875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.614908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.615034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.615068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.615193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.615227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.615405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.615438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.615619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.615654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.615778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.615810] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.615987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.616021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.616213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.616245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.616487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.616520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.616767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.616802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 
00:28:04.133 [2024-10-17 19:35:27.616992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.617024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.617141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.617174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.617312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.617345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.617632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.617667] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.617899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.617938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.618130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.618164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.618403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.618436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.618621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.618654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.618769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.618802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.619004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.619037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 
00:28:04.133 [2024-10-17 19:35:27.619299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.619331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.619524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.619557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.619773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.619807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.619926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.619959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.620146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.620180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.133 [2024-10-17 19:35:27.620295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.133 [2024-10-17 19:35:27.620328] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.133 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.620571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.620615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.620734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.620767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.620989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.621037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.621160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.621194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 
00:28:04.134 [2024-10-17 19:35:27.621434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.621467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.621647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.621683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.621907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.621940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.622071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.622105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.622287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.622320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.622451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.622483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.622674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.622709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.623002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.623035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.623159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.623192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.623405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.623438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 
00:28:04.134 [2024-10-17 19:35:27.623645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.623680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.623792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.623834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.623966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.623999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.624134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.624166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.624357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.624391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.624496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.624530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.624725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.624759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.624889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.624923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.625098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.625132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.625262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.625295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 
00:28:04.134 [2024-10-17 19:35:27.625540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.625573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.625710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.625745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.625938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.625971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.626163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.626195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.626324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.626358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.626556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.626590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.626800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.626833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.626944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.626978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.627176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.627209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.627314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.627348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 
00:28:04.134 [2024-10-17 19:35:27.627546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.627579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.627732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.627765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.628004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.628038] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.628223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.628257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.628379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.628412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.628627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.134 [2024-10-17 19:35:27.628661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.134 qpair failed and we were unable to recover it. 00:28:04.134 [2024-10-17 19:35:27.628848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.628883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.629083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.629115] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.629343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.629384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.629538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.629589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 
00:28:04.135 [2024-10-17 19:35:27.629723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.629761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.629889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.629921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.630036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.630069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.630269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.630304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.630424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.630457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.630590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.630636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.630816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.630849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.631028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.631061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.631260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.631294] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.631488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.631522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 
00:28:04.135 [2024-10-17 19:35:27.631659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.631696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.631892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.631925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.632121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.632153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.632363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.632397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.632647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.632682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.632811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.632845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.632972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.633005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.633182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.633216] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.633325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.633359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.633635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.633670] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 
00:28:04.135 [2024-10-17 19:35:27.633794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.633828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.634052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.634084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.634267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.634300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.634467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.634499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.634688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.634722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.634907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.634940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.635124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.635157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.635417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.135 [2024-10-17 19:35:27.635450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.135 qpair failed and we were unable to recover it. 00:28:04.135 [2024-10-17 19:35:27.635709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.635743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.635945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.635977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 
00:28:04.136 [2024-10-17 19:35:27.636158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.636189] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.636312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.636345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.636535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.636568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8508000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.636846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.636885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.637078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.637111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.637238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.637272] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.637396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.637430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.637620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.637656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.637899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.637942] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.638120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.638155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 
00:28:04.136 [2024-10-17 19:35:27.638337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.638370] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.638477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.638510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.638646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.638680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.638957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.638991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.639180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.639214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.639340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.639372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.639558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.639592] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.639786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.639820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.640002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.640036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.640211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.640245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 
00:28:04.136 [2024-10-17 19:35:27.640449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.640483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.640656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.640690] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.640876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.640910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.641027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.641060] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.641178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.641210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.641327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.641364] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.641513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.641547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.641680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.641713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.641884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.641919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.642040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.642073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 
00:28:04.136 [2024-10-17 19:35:27.642263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.642296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.642419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.642453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.642697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.642733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.642920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.642954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.643147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.643180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.643353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.643399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.643584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.643626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.643818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.643852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.136 [2024-10-17 19:35:27.644000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.136 [2024-10-17 19:35:27.644034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.136 qpair failed and we were unable to recover it. 00:28:04.137 [2024-10-17 19:35:27.644227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.137 [2024-10-17 19:35:27.644260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.137 qpair failed and we were unable to recover it. 
00:28:04.137 [2024-10-17 19:35:27.644530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.644564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.644700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.644734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.644908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.644941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.645246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.645281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.645464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.645496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.645690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.645725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.645933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.645965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.646171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.646204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.646448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.646481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.646731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.646767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.646898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.646932] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.647175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.647209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.647488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.647522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.647707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.647740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.647847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.647880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.648066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.648100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.648227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.648260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.648426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.648460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.648673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.648708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:04.137 [2024-10-17 19:35:27.648885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.648921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.649105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.649138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:28:04.137 [2024-10-17 19:35:27.649340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.649374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.649518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.649552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:28:04.137 [2024-10-17 19:35:27.649746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.649780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:04.137 [2024-10-17 19:35:27.649958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.649993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.650118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.650152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:04.137 [2024-10-17 19:35:27.650350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.650384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.650561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.650593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.650796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.650833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.651125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.651158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.651284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.651317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.651509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.651541] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.651678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.651714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.651886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.651917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.652033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.652065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.652347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.652381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.652506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.652538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.137 qpair failed and we were unable to recover it.
00:28:04.137 [2024-10-17 19:35:27.652679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.137 [2024-10-17 19:35:27.652713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.652931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.652964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.653226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.653258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.653433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.653467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.653682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.653718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.653927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.653959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.654154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.654190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.654404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.654439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.654639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.654672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.654927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.654960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.655075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.655111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.655259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.655290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.655472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.655505] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.655701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.655735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.655905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.655938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.656121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.656153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.656369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.656402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.656667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.656700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.656816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.656851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.657052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.657085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.657300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.657334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.657515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.657549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.657828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.657861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.657998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.658033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.658151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.658187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.658375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.658408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.658581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.658625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.658759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.658791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.658922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.658954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.659198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.659231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.659418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.659451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.659574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.659617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.659745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.659778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.659970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.660005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.660122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.660155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.660274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.660307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.660495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.660528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.660716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.660749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.660886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.660920] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.661069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.661105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.661281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.138 [2024-10-17 19:35:27.661316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.138 qpair failed and we were unable to recover it.
00:28:04.138 [2024-10-17 19:35:27.661449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.661481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.661698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.661734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.661977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.662010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.662202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.662235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.662361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.662397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.662580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.662625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.662805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.662837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.662947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.662981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.663119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.663152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.663259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.663291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.663420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.663458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.663647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.663680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.663871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.663903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.664036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.664068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.664216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.664250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.664356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.664388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.664581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.664624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.664801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.664834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.665007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.665039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.665207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.665238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.665355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.665388] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.665522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.665554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.665699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.665733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.665859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.665892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.666070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.666105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.666225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.666257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.666443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.666476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.666679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.666714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.666831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.666863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.667059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.667092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.667215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.667255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.667387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.667420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.667541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.667574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.667769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.667801] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.667983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.668014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.668143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.668177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.668394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.139 [2024-10-17 19:35:27.668426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.139 qpair failed and we were unable to recover it.
00:28:04.139 [2024-10-17 19:35:27.668539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.668572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.668782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.668817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.668927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.668959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.669098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.669131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.669324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.669358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.669493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.669524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.669643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.669676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.669797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.669829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.669931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.669963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.670114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.670147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.670325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.670358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.670554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.670587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.670713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.670746] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.670880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.670914] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.671032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.671069] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.671205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.671238] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.671421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.671454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.671580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.671624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.671739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.671772] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.671948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.671982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.672101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.672134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.672244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.672277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.672403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.672438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.672644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.672678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.672810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.672843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.673031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.673065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.673252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.673285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.673422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.673456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.673674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.673710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.673845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.673878] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.674049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.674083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.674270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.674304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.674419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.674454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.674639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.674673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.674787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.674822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.675012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.675045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.675157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.675193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.675321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.675354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.675562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.675599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.675759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.675794] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.140 qpair failed and we were unable to recover it.
00:28:04.140 [2024-10-17 19:35:27.675970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.140 [2024-10-17 19:35:27.676002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.676252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.676284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.676414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.676446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.676639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.676674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.676862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.676895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.677067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.677102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.677222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.677253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.677383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.677414] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.677593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.677635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.677756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.677788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.677905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.677938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.678141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.678174] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.678285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.141 [2024-10-17 19:35:27.678317] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.141 qpair failed and we were unable to recover it.
00:28:04.141 [2024-10-17 19:35:27.678431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.678462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.678647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.678681] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.678803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.678835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.678976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.679116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.679274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.679436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.679576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.679727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.679872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.679905] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 
00:28:04.141 [2024-10-17 19:35:27.680021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.680055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.680234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.680266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.680510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.680544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.680688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.680721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.680824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.680858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.681031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.681067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.681197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.681230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.681344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.681376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.681503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.681535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.681665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.681700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 
00:28:04.141 [2024-10-17 19:35:27.681836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.681870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.681981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.682014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.682136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.682172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.682352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.682386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.682556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.682590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.682720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.682753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.682888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.682919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.683116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.141 [2024-10-17 19:35:27.683150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.141 qpair failed and we were unable to recover it. 00:28:04.141 [2024-10-17 19:35:27.683297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.683337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.683450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.683484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 
00:28:04.142 [2024-10-17 19:35:27.683621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.683655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.683807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.683840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.684025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.684058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.684233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.684266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.684456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.684490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.684676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.684710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.684842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.684875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.684996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.685030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.685147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.685179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 00:28:04.142 [2024-10-17 19:35:27.685361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.142 [2024-10-17 19:35:27.685393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.142 qpair failed and we were unable to recover it. 
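For reference, errno 111 in the failure records above is ECONNREFUSED on Linux: each retry against 10.0.0.2:4420 is refused, consistent with the target side being down at this point in the disconnect test. A minimal shell sketch (not part of the captured log) to confirm the errno mapping on a test node:

    # Translate the numeric errno reported by posix_sock_create into its symbolic name.
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused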
00:28:04.142 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:04.142 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:04.142 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.142 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:04.142 [... 8 connect()/qpair failure records (19:35:27.685587 through 19:35:27.687019, tqpair=0x7f84fc000b90, addr=10.0.0.2, port=4420) interleaved with the trace lines above; elided ...]
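The "rpc_cmd bdev_malloc_create 64 512 -b Malloc0" trace above is SPDK's test wrapper around the JSON-RPC client. A minimal sketch of the equivalent direct call, assuming a standard SPDK checkout with the target app listening on the default RPC socket:

    # Create a 64 MiB RAM-backed malloc bdev with 512-byte blocks, named Malloc0
    # (the same positional arguments the test script forwards through rpc_cmd).
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0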
00:28:04.142 [2024-10-17 19:35:27.687170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.142 [2024-10-17 19:35:27.687202] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.142 qpair failed and we were unable to recover it.
00:28:04.142 [... the same three-record connect()/qpair failure sequence repeats 139 more times through 19:35:27.714207, cycling between tqpair=0x7f84fc000b90 and tqpair=0xb48ca0, with a single attempt on tqpair=0x7f8508000b90 at 19:35:27.703641, always addr=10.0.0.2, port=4420; repetitions elided ...]
00:28:04.146 [2024-10-17 19:35:27.714344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.714377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.714528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.714561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.714751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.714786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.714991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.715024] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.715140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.715172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.715427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.715461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.715653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.715688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.715808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.715841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.716022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.716055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.716194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.716227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 
00:28:04.146 [2024-10-17 19:35:27.716411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.716446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.716624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.716659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.716801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.716835] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.717158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.717193] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.717439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.717473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.717647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.717680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.717969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.718002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.718132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.718166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.718351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.718385] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.718596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.718638] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 
00:28:04.146 [2024-10-17 19:35:27.718773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.718807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.718932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.718966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.719075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.719108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.719290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.719323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.719621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.719654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.719855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.719887] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.720018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.720058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 Malloc0 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.720312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.720345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.720526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.720559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 00:28:04.146 [2024-10-17 19:35:27.720726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.146 [2024-10-17 19:35:27.720760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.146 qpair failed and we were unable to recover it. 
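errno = 111 on Linux is ECONNREFUSED: the host side keeps dialing 10.0.0.2:4420 while nothing is listening there yet, which is exactly the window this target-disconnect test exercises. A minimal sketch of the same failure mode using bash's /dev/tcp pseudo-device (address and port copied from the log; the command itself is illustrative, not part of the test scripts):

  $ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'
  bash: connect: Connection refused
  bash: /dev/tcp/10.0.0.2/4420: Connection refused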
00:28:04.146 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.146 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:04.146 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.147 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... interleaved with the trace lines above, the tqpair=0x7f84fc000b90 failure triplet repeats for timestamps 19:35:27.721003 through 19:35:27.724493 ...]
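The "nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- <file>@<line> -- #" lines interleaved with the errors are bash xtrace output from the test scripts. A sketch of how such prefixed traces can be produced with a custom PS4 (the PS4 string below is an assumption for illustration, not SPDK's exact one):

  #!/usr/bin/env bash
  # trace.sh: PS4 is expanded per traced command, so the timestamp,
  # source file, and line number are re-evaluated on every line.
  PS4='$(date +%T) ${BASH_SOURCE##*/}@${LINENO} -- # '
  set -x            # enable xtrace
  [[ 0 == 0 ]]      # traced as e.g.: 19:35:27 trace.sh@6 -- # [[ 0 == 0 ]]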
[... three more failure triplets for tqpair=0x7f84fc000b90, timestamps 19:35:27.724734 through 19:35:27.725244 ...]
00:28:04.147 [2024-10-17 19:35:27.725469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.147 [2024-10-17 19:35:27.725512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb48ca0 with addr=10.0.0.2, port=4420
00:28:04.147 qpair failed and we were unable to recover it.
[... the failure triplet, now for tqpair=0xb48ca0, repeats for timestamps 19:35:27.725638 through 19:35:27.727528 ...]
00:28:04.147 [2024-10-17 19:35:27.727660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the tqpair=0xb48ca0 failure triplet continues for timestamps 19:35:27.727664 through 19:35:27.735627 ...]
[... the tqpair=0xb48ca0 failure triplet repeats for timestamps 19:35:27.735820 through 19:35:27.736854 ...]
00:28:04.148 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.148 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:04.148 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.148 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:04.148 [2024-10-17 19:35:27.737015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.148 [2024-10-17 19:35:27.737075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420
00:28:04.148 qpair failed and we were unable to recover it.
00:28:04.148 [2024-10-17 19:35:27.737320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.148 [2024-10-17 19:35:27.737358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.148 qpair failed and we were unable to recover it.
00:28:04.148 [2024-10-17 19:35:27.737624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.148 [2024-10-17 19:35:27.737657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.148 qpair failed and we were unable to recover it.
[... the same tqpair=0x7f84fc000b90 failure triplet repeats for timestamps 19:35:27.737849 through 19:35:27.743879 ...]
[... the tqpair=0x7f84fc000b90 failure triplet repeats for timestamps 19:35:27.744068 through 19:35:27.745834 ...]
00:28:04.149 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.149 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:04.149 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.149 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
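Taken together, the three rpc_cmd traces in this stretch of the log are the target-side bring-up the host's connect loop is racing against: create the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1, then attach bdev Malloc0 as its namespace. Assuming rpc_cmd is the autotest harness's wrapper around SPDK's scripts/rpc.py (arguments below are copied verbatim from the trace lines; nothing beyond that is implied), the equivalent standalone sequence would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0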
00:28:04.149 [2024-10-17 19:35:27.746020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.149 [2024-10-17 19:35:27.746052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.149 qpair failed and we were unable to recover it. 00:28:04.149 [2024-10-17 19:35:27.746199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.149 [2024-10-17 19:35:27.746232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.149 qpair failed and we were unable to recover it. 00:28:04.149 [2024-10-17 19:35:27.746359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.149 [2024-10-17 19:35:27.746393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.149 qpair failed and we were unable to recover it. 00:28:04.149 [2024-10-17 19:35:27.746597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.149 [2024-10-17 19:35:27.746650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.149 qpair failed and we were unable to recover it. 00:28:04.149 [2024-10-17 19:35:27.746794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.746827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.747018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.747051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.747175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.747206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.747387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.747418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.747538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.747570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.747694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.747729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 
00:28:04.150 [2024-10-17 19:35:27.747846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.747881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.748087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.748121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.748396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.748428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.748540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.748574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.748699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.748732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.748860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.748892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.749095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.749128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.749323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.749356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.749571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.749617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.749846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.749879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 
00:28:04.150 [2024-10-17 19:35:27.750092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.750125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.750258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.750289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.750413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.750444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.750685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.750719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8500000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.750865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.750900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.751097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.751130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.751352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.751384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.751567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.751609] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.751733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.751767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.751954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.751987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 
00:28:04.150 [2024-10-17 19:35:27.752162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.752195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.150 [2024-10-17 19:35:27.752380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.752413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 [2024-10-17 19:35:27.752525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.752559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:04.150 [2024-10-17 19:35:27.752827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.752862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 [2024-10-17 19:35:27.753034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.753068] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.150 [2024-10-17 19:35:27.753320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.753353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:04.150 [2024-10-17 19:35:27.753488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.753521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 [2024-10-17 19:35:27.753732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:04.150 [2024-10-17 19:35:27.753767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420
00:28:04.150 qpair failed and we were unable to recover it.
00:28:04.150 [2024-10-17 19:35:27.753908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.753941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.754187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.754218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.754344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.754378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.754620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.754653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.150 [2024-10-17 19:35:27.754892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.150 [2024-10-17 19:35:27.754924] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.150 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.755192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.151 [2024-10-17 19:35:27.755227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.755366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.151 [2024-10-17 19:35:27.755400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.755658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:04.151 [2024-10-17 19:35:27.755692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f84fc000b90 with addr=10.0.0.2, port=4420 00:28:04.151 qpair failed and we were unable to recover it. 
00:28:04.151 [2024-10-17 19:35:27.755841] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:04.151 [2024-10-17 19:35:27.758322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.151 [2024-10-17 19:35:27.758443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.151 [2024-10-17 19:35:27.758488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.151 [2024-10-17 19:35:27.758515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:04.151 [2024-10-17 19:35:27.758536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:04.151 [2024-10-17 19:35:27.758586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:04.151 qpair failed and we were unable to recover it.
00:28:04.151 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:04.151 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:04.151 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:04.151 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:04.151 [2024-10-17 19:35:27.768239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:04.151 [2024-10-17 19:35:27.768344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:04.151 [2024-10-17 19:35:27.768385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:04.151 [2024-10-17 19:35:27.768406] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
[2024-10-17 19:35:27.768426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
[2024-10-17 19:35:27.768471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:28:04.151 19:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2256742
[2024-10-17 19:35:27.778266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-10-17 19:35:27.778349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-10-17 19:35:27.778376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-10-17 19:35:27.778392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-10-17 19:35:27.778407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
[2024-10-17 19:35:27.778439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:28:04.151 [2024-10-17 19:35:27.788179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-10-17 19:35:27.788243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-10-17 19:35:27.788263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-10-17 19:35:27.788272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-10-17 19:35:27.788280] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
[2024-10-17 19:35:27.788300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:28:04.151 [2024-10-17 19:35:27.798262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-10-17 19:35:27.798333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-10-17 19:35:27.798350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-10-17 19:35:27.798357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-10-17 19:35:27.798363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
[2024-10-17 19:35:27.798377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
qpair failed and we were unable to recover it.
00:28:04.151 [2024-10-17 19:35:27.808248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.808298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.808311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.808317] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.808323] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.808338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.818281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.818343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.818358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.818364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.818370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.818385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.828323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.828387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.828401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.828408] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.828414] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.828428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 
00:28:04.151 [2024-10-17 19:35:27.838383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.838458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.838472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.838479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.838485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.838499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.848375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.848463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.848477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.848483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.848490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.848503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.858401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.858454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.858467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.858474] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.858480] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.858495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 
00:28:04.151 [2024-10-17 19:35:27.868428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.151 [2024-10-17 19:35:27.868482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.151 [2024-10-17 19:35:27.868495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.151 [2024-10-17 19:35:27.868502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.151 [2024-10-17 19:35:27.868508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.151 [2024-10-17 19:35:27.868523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.151 qpair failed and we were unable to recover it. 00:28:04.151 [2024-10-17 19:35:27.878370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.152 [2024-10-17 19:35:27.878424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.152 [2024-10-17 19:35:27.878438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.152 [2024-10-17 19:35:27.878445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.152 [2024-10-17 19:35:27.878451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.152 [2024-10-17 19:35:27.878465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.152 qpair failed and we were unable to recover it. 00:28:04.152 [2024-10-17 19:35:27.888522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.152 [2024-10-17 19:35:27.888575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.152 [2024-10-17 19:35:27.888592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.152 [2024-10-17 19:35:27.888598] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.152 [2024-10-17 19:35:27.888611] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.152 [2024-10-17 19:35:27.888625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.152 qpair failed and we were unable to recover it. 
00:28:04.152 [2024-10-17 19:35:27.898502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.152 [2024-10-17 19:35:27.898597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.152 [2024-10-17 19:35:27.898617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.152 [2024-10-17 19:35:27.898625] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.152 [2024-10-17 19:35:27.898632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.152 [2024-10-17 19:35:27.898647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.152 qpair failed and we were unable to recover it. 00:28:04.412 [2024-10-17 19:35:27.908536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.412 [2024-10-17 19:35:27.908611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.412 [2024-10-17 19:35:27.908625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.412 [2024-10-17 19:35:27.908633] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.412 [2024-10-17 19:35:27.908639] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.412 [2024-10-17 19:35:27.908654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.412 qpair failed and we were unable to recover it. 00:28:04.412 [2024-10-17 19:35:27.918562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.412 [2024-10-17 19:35:27.918634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.412 [2024-10-17 19:35:27.918648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.412 [2024-10-17 19:35:27.918655] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.412 [2024-10-17 19:35:27.918661] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.412 [2024-10-17 19:35:27.918676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.412 qpair failed and we were unable to recover it. 
00:28:04.412 [2024-10-17 19:35:27.928606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.412 [2024-10-17 19:35:27.928668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.412 [2024-10-17 19:35:27.928681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.412 [2024-10-17 19:35:27.928688] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.412 [2024-10-17 19:35:27.928694] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.412 [2024-10-17 19:35:27.928712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.412 qpair failed and we were unable to recover it. 00:28:04.412 [2024-10-17 19:35:27.938642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.412 [2024-10-17 19:35:27.938694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.412 [2024-10-17 19:35:27.938707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.938714] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.938720] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.938734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:27.948692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:27.948746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:27.948760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.948767] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.948772] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.948786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 
00:28:04.413 [2024-10-17 19:35:27.958616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:27.958673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:27.958686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.958692] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.958698] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.958712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:27.968715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:27.968781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:27.968795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.968801] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.968807] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.968821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:27.978729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:27.978782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:27.978799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.978806] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.978812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.978827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 
00:28:04.413 [2024-10-17 19:35:27.988777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:27.988835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:27.988849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.988856] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.988861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.988876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:27.998801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:27.998863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:27.998877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:27.998884] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:27.998889] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:27.998904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:28.008750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.008800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.008814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.008820] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.008826] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.008840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 
00:28:04.413 [2024-10-17 19:35:28.018851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.018903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.018916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.018923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.018932] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.018947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:28.028888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.028941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.028954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.028961] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.028967] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.028982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:28.038916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.038966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.038980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.038987] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.038993] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.039008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 
00:28:04.413 [2024-10-17 19:35:28.048935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.048997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.049011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.049017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.049023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.049038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:28.058964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.059018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.059031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.059038] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.059044] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.059058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 00:28:04.413 [2024-10-17 19:35:28.069003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.069063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.069077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.069083] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.069089] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.413 [2024-10-17 19:35:28.069103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.413 qpair failed and we were unable to recover it. 
00:28:04.413 [2024-10-17 19:35:28.079022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.413 [2024-10-17 19:35:28.079078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.413 [2024-10-17 19:35:28.079091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.413 [2024-10-17 19:35:28.079098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.413 [2024-10-17 19:35:28.079104] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.079118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.089040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.089092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.089106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.089112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.089119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.089133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.099064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.099120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.099133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.099140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.099146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.099160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 
00:28:04.414 [2024-10-17 19:35:28.109108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.109193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.109207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.109213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.109222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.109237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.119160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.119213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.119227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.119233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.119239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.119254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.129199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.129254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.129267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.129273] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.129279] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.129294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 
00:28:04.414 [2024-10-17 19:35:28.139108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.139156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.139171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.139179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.139186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.139201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.149172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.149252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.149265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.149272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.149278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.149292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.159255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.159323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.159337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.159344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.159350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.159365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 
00:28:04.414 [2024-10-17 19:35:28.169199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.169295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.169308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.169314] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.169320] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.169334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.179298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.179351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.179364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.179370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.179376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.179389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 00:28:04.414 [2024-10-17 19:35:28.189349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.414 [2024-10-17 19:35:28.189451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.414 [2024-10-17 19:35:28.189463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.414 [2024-10-17 19:35:28.189470] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.414 [2024-10-17 19:35:28.189476] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.414 [2024-10-17 19:35:28.189489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.414 qpair failed and we were unable to recover it. 
00:28:04.675 [2024-10-17 19:35:28.199353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.199409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.199423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.199433] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.199439] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.199453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.209359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.209438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.209451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.209458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.209464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.209478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.219342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.219393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.219407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.219413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.219419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.219433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 
00:28:04.675 [2024-10-17 19:35:28.229475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.229529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.229543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.229550] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.229555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.229569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.239456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.239510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.239523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.239530] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.239536] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.239550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.249528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.249591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.249610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.249616] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.249622] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.249636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 
00:28:04.675 [2024-10-17 19:35:28.259538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.259616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.259630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.259637] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.259643] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.259657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.269531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.269611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.269624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.269630] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.269636] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.269651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.279567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.279631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.279645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.279651] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.279657] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.279671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 
00:28:04.675 [2024-10-17 19:35:28.289599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.289680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.289693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.289703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.289709] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.289723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.299584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.299642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.299656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.299663] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.299669] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.299683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.309598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.309681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.309696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.309703] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.309710] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.309725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 
00:28:04.675 [2024-10-17 19:35:28.319676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.319734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.319748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.319755] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.319761] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.675 [2024-10-17 19:35:28.319776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.675 qpair failed and we were unable to recover it. 00:28:04.675 [2024-10-17 19:35:28.329673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.675 [2024-10-17 19:35:28.329749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.675 [2024-10-17 19:35:28.329763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.675 [2024-10-17 19:35:28.329769] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.675 [2024-10-17 19:35:28.329775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.329790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.339749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.340009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.340025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.340032] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.340038] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.340054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 
00:28:04.676 [2024-10-17 19:35:28.349807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.349867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.349881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.349888] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.349894] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.349909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.359748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.359803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.359817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.359824] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.359830] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.359843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.369835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.369890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.369903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.369909] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.369915] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.369930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 
00:28:04.676 [2024-10-17 19:35:28.379836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.379888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.379907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.379914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.379920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.379934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.389923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.389982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.389995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.390002] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.390008] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.390022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.399855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.399927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.399940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.399946] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.399952] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.399967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 
00:28:04.676 [2024-10-17 19:35:28.409952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.410008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.410021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.410028] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.410033] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.410048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.419953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.420004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.420018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.420025] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.420031] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.420048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.429941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.429998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.430011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.430018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.430024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.430037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 
00:28:04.676 [2024-10-17 19:35:28.439961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.440026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.440041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.440048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.440054] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.440068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.676 [2024-10-17 19:35:28.450086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.676 [2024-10-17 19:35:28.450143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.676 [2024-10-17 19:35:28.450157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.676 [2024-10-17 19:35:28.450164] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.676 [2024-10-17 19:35:28.450169] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.676 [2024-10-17 19:35:28.450184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.676 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.460139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.460196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.460210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.460217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.460224] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.460239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 
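On the target side, the "Unknown controller ID 0x1" entries come from _nvmf_ctrlr_add_io_qpair() in ctrlr.c: an I/O-queue CONNECT carries the CNTLID that the earlier admin CONNECT returned, and when the subsystem has no live controller under that CNTLID the CONNECT is rejected, which the host then observes as sct 1, sc 0x82. A schematic of that lookup, using illustrative names and data rather than SPDK's actual internals:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct io_ctrlr { uint16_t cntlid; bool live; };

/* Return the live controller matching cntlid, or NULL if none exists. */
static struct io_ctrlr *find_ctrlr(struct io_ctrlr *tbl, size_t n, uint16_t cntlid)
{
    for (size_t i = 0; i < n; i++) {
        if (tbl[i].live && tbl[i].cntlid == cntlid) {
            return &tbl[i];
        }
    }
    return NULL;
}

int main(void)
{
    /* One controller slot for CNTLID 0x1, already torn down. */
    struct io_ctrlr subsys[] = { { 0x1, false } };

    if (find_ctrlr(subsys, 1, 0x1) == NULL) {
        /* The condition the target log line reports; the host then sees
         * the CONNECT completion fail with sct 1, sc 0x82. */
        printf("Unknown controller ID 0x1 -> reject I/O-queue CONNECT\n");
    }
    return 0;
}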
00:28:04.938 [2024-10-17 19:35:28.470131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.470182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.470198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.470204] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.470210] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.470224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.480095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.480177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.480191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.480197] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.480203] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.480217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.490152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.490218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.490232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.490239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.490245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.490260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-10-17 19:35:28.500146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.500197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.500211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.500217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.500222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.500236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.510182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.510235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.510248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.510255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.510261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.510278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.520230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.520290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.520314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.520321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.520327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.520345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-10-17 19:35:28.530221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.530274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.530288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.530296] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.530302] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.530317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.540283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.540331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.540345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.540351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.540357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.540372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.550282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.550337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.550351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.550358] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.550364] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.550378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.938 [2024-10-17 19:35:28.560362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.560461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.560475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.560481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.560487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.560501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.570428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.570488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.570502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.570508] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.570514] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.570529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 00:28:04.938 [2024-10-17 19:35:28.580457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.938 [2024-10-17 19:35:28.580508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.938 [2024-10-17 19:35:28.580522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.938 [2024-10-17 19:35:28.580528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.938 [2024-10-17 19:35:28.580534] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.938 [2024-10-17 19:35:28.580548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.938 qpair failed and we were unable to recover it. 
00:28:04.939 [2024-10-17 19:35:28.590539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.590639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.590652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.590659] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.590665] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.590679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-10-17 19:35:28.600524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.600577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.600590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.600596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.600610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.600625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-10-17 19:35:28.610466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.610520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.610534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.610541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.610547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.610560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 
00:28:04.939 [2024-10-17 19:35:28.620553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.620605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.620620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.620627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.620633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.620648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-10-17 19:35:28.630511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.630564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.630577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.630583] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.630589] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.630610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-10-17 19:35:28.640536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.640592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.640610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.640617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.640623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.640638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 
00:28:04.939 [2024-10-17 19:35:28.650568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.650674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.650688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.650695] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.650701] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.650715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-10-17 19:35:28.660661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.660714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.660727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.660734] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.660739] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.660753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 00:28:04.939 [2024-10-17 19:35:28.670631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.670718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.670731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.670737] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.670743] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.670757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it. 
00:28:04.939 [2024-10-17 19:35:28.680663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:04.939 [2024-10-17 19:35:28.680718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:04.939 [2024-10-17 19:35:28.680732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:04.939 [2024-10-17 19:35:28.680738] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:04.939 [2024-10-17 19:35:28.680744] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:04.939 [2024-10-17 19:35:28.680759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:04.939 qpair failed and we were unable to recover it.
00:28:05.726 (previous seven-message failure block repeated 68 more times, identical except for timestamps, at roughly 10 ms intervals from [2024-10-17 19:35:28.690766] through [2024-10-17 19:35:29.362676]; every attempt targeted tqpair=0x7f84fc000b90 on qpair id 4 and each ended with "qpair failed and we were unable to recover it.")
00:28:05.726 [2024-10-17 19:35:29.372702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.726 [2024-10-17 19:35:29.372756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.726 [2024-10-17 19:35:29.372770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.726 [2024-10-17 19:35:29.372776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.726 [2024-10-17 19:35:29.372782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.726 [2024-10-17 19:35:29.372796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.726 qpair failed and we were unable to recover it. 00:28:05.726 [2024-10-17 19:35:29.382718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.726 [2024-10-17 19:35:29.382773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.382786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.382792] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.382798] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.382813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.392751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.392803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.392815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.392822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.392828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.392842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 
00:28:05.727 [2024-10-17 19:35:29.402777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.402841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.402854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.402860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.402866] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.402880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.412803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.412857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.412870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.412876] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.412882] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.412897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.422880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.422943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.422956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.422963] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.422969] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.422984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 
00:28:05.727 [2024-10-17 19:35:29.432862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.432927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.432941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.432947] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.432953] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.432967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.442890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.442945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.442958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.442964] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.442970] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.442984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.452916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.452969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.452985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.452991] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.452997] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.453012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 
00:28:05.727 [2024-10-17 19:35:29.462944] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.462998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.463011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.463017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.463023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.463037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.472961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.473059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.473072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.473079] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.473084] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.473098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.483000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.483096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.483109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.483116] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.483121] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.483136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 
00:28:05.727 [2024-10-17 19:35:29.493032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.493086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.493099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.493106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.493112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.493130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.727 [2024-10-17 19:35:29.503057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.727 [2024-10-17 19:35:29.503109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.727 [2024-10-17 19:35:29.503122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.727 [2024-10-17 19:35:29.503128] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.727 [2024-10-17 19:35:29.503134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.727 [2024-10-17 19:35:29.503148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.727 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.513101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.513158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.513172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.513179] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.513185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.513199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 
00:28:05.989 [2024-10-17 19:35:29.523187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.523246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.523259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.523266] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.523272] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.523286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.533149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.533241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.533254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.533261] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.533266] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.533280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.543165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.543214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.543230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.543237] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.543242] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.543256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 
00:28:05.989 [2024-10-17 19:35:29.553222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.553280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.553293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.553300] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.553306] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.553319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.563203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.563287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.563300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.563306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.563312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.563327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.573250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.573309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.573322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.573329] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.573335] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.573349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 
00:28:05.989 [2024-10-17 19:35:29.583306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.583360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.583373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.583379] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.583401] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.583416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.593342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.593396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.593410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.593416] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.593422] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.593436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.603396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.603481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.603494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.603501] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.603507] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.603521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 
00:28:05.989 [2024-10-17 19:35:29.613406] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.613459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.613473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.613479] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.613485] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.613499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.989 qpair failed and we were unable to recover it. 00:28:05.989 [2024-10-17 19:35:29.623426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.989 [2024-10-17 19:35:29.623474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.989 [2024-10-17 19:35:29.623487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.989 [2024-10-17 19:35:29.623494] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.989 [2024-10-17 19:35:29.623500] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.989 [2024-10-17 19:35:29.623514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.633460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.633530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.633543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.633549] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.633555] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.633569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 
00:28:05.990 [2024-10-17 19:35:29.643493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.643577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.643590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.643597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.643607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.643623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.653509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.653576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.653590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.653596] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.653607] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.653622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.663545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.663594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.663611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.663617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.663623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.663637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 
00:28:05.990 [2024-10-17 19:35:29.673577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.673654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.673668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.673674] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.673683] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.673699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.683577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.683637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.683650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.683657] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.683663] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.683676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.693638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.693695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.693709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.693715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.693721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.693736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 
00:28:05.990 [2024-10-17 19:35:29.703653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.703704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.703717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.703724] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.703730] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.703743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.713694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.713751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.713765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.713771] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.713777] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.713792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.723729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.723788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.723801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.723808] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.723814] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.723828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 
00:28:05.990 [2024-10-17 19:35:29.733810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.733893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.733907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.733914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.733920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.733934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.743789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.743872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.743887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.743894] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.743901] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.743917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:05.990 [2024-10-17 19:35:29.753858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.753914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.753927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.753934] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.753940] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.753953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 
00:28:05.990 [2024-10-17 19:35:29.763831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:05.990 [2024-10-17 19:35:29.763883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:05.990 [2024-10-17 19:35:29.763896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:05.990 [2024-10-17 19:35:29.763906] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:05.990 [2024-10-17 19:35:29.763912] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:05.990 [2024-10-17 19:35:29.763925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:05.990 qpair failed and we were unable to recover it. 00:28:06.251 [2024-10-17 19:35:29.773807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.251 [2024-10-17 19:35:29.773859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.251 [2024-10-17 19:35:29.773873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.251 [2024-10-17 19:35:29.773880] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.251 [2024-10-17 19:35:29.773886] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.773901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.783821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.783870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.783884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.783890] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.783896] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.783910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 
00:28:06.252 [2024-10-17 19:35:29.793924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.793978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.793991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.793997] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.794003] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.794017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.803888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.803943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.803956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.803962] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.803968] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.803983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.813909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.813965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.813979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.813985] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.813991] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.814006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 
00:28:06.252 [2024-10-17 19:35:29.823930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.823984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.823997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.824003] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.824009] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.824023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.833968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.834025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.834038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.834045] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.834051] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.834065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.843979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.844032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.844045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.844051] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.844057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.844072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 
00:28:06.252 [2024-10-17 19:35:29.854129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.854190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.854203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.854213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.854219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.854232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.864044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.864094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.864107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.864113] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.864119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.864133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.874175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.874232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.874246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.874252] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.874258] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.874272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 
00:28:06.252 [2024-10-17 19:35:29.884123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.884179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.884192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.884198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.884204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.884218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.894139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.894193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.894206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.894213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.894219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.894233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 00:28:06.252 [2024-10-17 19:35:29.904266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.904343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.904356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.904362] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.904368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.252 [2024-10-17 19:35:29.904382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.252 qpair failed and we were unable to recover it. 
00:28:06.252 [2024-10-17 19:35:29.914243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.252 [2024-10-17 19:35:29.914330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.252 [2024-10-17 19:35:29.914343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.252 [2024-10-17 19:35:29.914349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.252 [2024-10-17 19:35:29.914356] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.914370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:29.924329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.924387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.924400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.924407] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.924413] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.924427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:29.934364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.934461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.934474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.934481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.934487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.934501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 
00:28:06.253 [2024-10-17 19:35:29.944320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.944381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.944398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.944404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.944410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.944424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:29.954371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.954424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.954438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.954444] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.954450] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.954465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:29.964388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.964436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.964450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.964456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.964462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.964477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 
00:28:06.253 [2024-10-17 19:35:29.974347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.974399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.974412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.974418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.974424] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.974438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:29.984405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.984505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.984518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.984525] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.984530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.984548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:29.994494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:29.994549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:29.994562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:29.994569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:29.994575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:29.994589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 
00:28:06.253 [2024-10-17 19:35:30.004442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:30.004498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:30.004512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:30.004518] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:30.004524] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:30.004539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:30.014567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:30.014627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:30.014642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:30.014648] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:30.014654] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:30.014669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.253 [2024-10-17 19:35:30.024553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:30.024625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:30.024639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:30.024645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:30.024651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:30.024667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 
00:28:06.253 [2024-10-17 19:35:30.034646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.253 [2024-10-17 19:35:30.034704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.253 [2024-10-17 19:35:30.034721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.253 [2024-10-17 19:35:30.034728] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.253 [2024-10-17 19:35:30.034734] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.253 [2024-10-17 19:35:30.034749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.253 qpair failed and we were unable to recover it. 00:28:06.514 [2024-10-17 19:35:30.044591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.044649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.044663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.044669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.044676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.044691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 00:28:06.514 [2024-10-17 19:35:30.054667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.054722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.054736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.054744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.054749] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.054765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 
00:28:06.514 [2024-10-17 19:35:30.064740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.064844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.064859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.064866] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.064872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.064887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 00:28:06.514 [2024-10-17 19:35:30.074670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.074732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.074746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.074753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.074760] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.074777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 00:28:06.514 [2024-10-17 19:35:30.084764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.084821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.084836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.084842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.084848] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.084863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 
00:28:06.514 [2024-10-17 19:35:30.094789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.094841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.094854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.094860] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.094867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.094881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 00:28:06.514 [2024-10-17 19:35:30.104788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.104840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.104855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.104861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.104868] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.104881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 00:28:06.514 [2024-10-17 19:35:30.114848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.514 [2024-10-17 19:35:30.114918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.514 [2024-10-17 19:35:30.114933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.514 [2024-10-17 19:35:30.114940] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.514 [2024-10-17 19:35:30.114946] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.514 [2024-10-17 19:35:30.114960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.514 qpair failed and we were unable to recover it. 
00:28:06.515 [2024-10-17 19:35:30.124844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.124904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.124917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.124923] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.124929] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.124943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.134912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.134991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.135004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.135011] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.135016] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.135030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.144903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.145000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.145013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.145020] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.145025] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.145040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 
00:28:06.515 [2024-10-17 19:35:30.155015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.155069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.155082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.155089] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.155095] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.155109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.164973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.165040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.165053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.165060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.165069] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.165083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.174924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.174985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.174999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.175005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.175011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.175025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 
00:28:06.515 [2024-10-17 19:35:30.184987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.185040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.185054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.185060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.185066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.185080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.194991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.195048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.195061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.195068] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.195073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.195088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.205113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.205165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.205178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.205184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.205190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.205204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 
00:28:06.515 [2024-10-17 19:35:30.215116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.215201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.215214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.215221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.215227] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.215241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.225195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.225248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.225261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.225267] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.225273] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.225287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.235188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.235245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.235258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.235265] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.235271] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.235285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 
00:28:06.515 [2024-10-17 19:35:30.245155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.245207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.245227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.245233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.245240] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.245258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.255174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.515 [2024-10-17 19:35:30.255226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.515 [2024-10-17 19:35:30.255239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.515 [2024-10-17 19:35:30.255249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.515 [2024-10-17 19:35:30.255255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.515 [2024-10-17 19:35:30.255270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.515 qpair failed and we were unable to recover it. 00:28:06.515 [2024-10-17 19:35:30.265254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.516 [2024-10-17 19:35:30.265304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.516 [2024-10-17 19:35:30.265318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.516 [2024-10-17 19:35:30.265324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.516 [2024-10-17 19:35:30.265330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.516 [2024-10-17 19:35:30.265344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.516 qpair failed and we were unable to recover it. 
00:28:06.516 [2024-10-17 19:35:30.275296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.516 [2024-10-17 19:35:30.275349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.516 [2024-10-17 19:35:30.275363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.516 [2024-10-17 19:35:30.275369] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.516 [2024-10-17 19:35:30.275375] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.516 [2024-10-17 19:35:30.275390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.516 qpair failed and we were unable to recover it. 00:28:06.516 [2024-10-17 19:35:30.285319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.516 [2024-10-17 19:35:30.285373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.516 [2024-10-17 19:35:30.285386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.516 [2024-10-17 19:35:30.285393] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.516 [2024-10-17 19:35:30.285399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.516 [2024-10-17 19:35:30.285413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.516 qpair failed and we were unable to recover it. 00:28:06.516 [2024-10-17 19:35:30.295389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.516 [2024-10-17 19:35:30.295439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.516 [2024-10-17 19:35:30.295453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.516 [2024-10-17 19:35:30.295460] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.516 [2024-10-17 19:35:30.295466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.516 [2024-10-17 19:35:30.295480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.516 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-17 19:35:30.305369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.305424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.305438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.305445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.305451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.305466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.315409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.315492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.315506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.315513] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.315518] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.315533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.325417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.325471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.325484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.325491] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.325497] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.325511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-17 19:35:30.335448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.335500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.335513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.335520] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.335526] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.335540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.345405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.345499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.345512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.345522] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.345528] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.345542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.355515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.355577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.355590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.355597] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.355606] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.355620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-17 19:35:30.365535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.365588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.365604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.365611] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.365617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.365631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.375597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.375651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.375664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.375671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.375677] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.375691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.385565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.385625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.385638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.385645] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.385651] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.385665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-17 19:35:30.395549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.395618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.395633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.395640] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.395646] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.395661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.405680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.405732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.405746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.405753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.405759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.405772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 00:28:06.777 [2024-10-17 19:35:30.415686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.415737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.777 [2024-10-17 19:35:30.415751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.777 [2024-10-17 19:35:30.415757] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.777 [2024-10-17 19:35:30.415763] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.777 [2024-10-17 19:35:30.415778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.777 qpair failed and we were unable to recover it. 
00:28:06.777 [2024-10-17 19:35:30.425704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.777 [2024-10-17 19:35:30.425770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.778 [2024-10-17 19:35:30.425783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.778 [2024-10-17 19:35:30.425789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.778 [2024-10-17 19:35:30.425795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.778 [2024-10-17 19:35:30.425809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.778 qpair failed and we were unable to recover it. 00:28:06.778 [2024-10-17 19:35:30.435803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.778 [2024-10-17 19:35:30.435858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.778 [2024-10-17 19:35:30.435874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.778 [2024-10-17 19:35:30.435881] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.778 [2024-10-17 19:35:30.435887] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.778 [2024-10-17 19:35:30.435901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.778 qpair failed and we were unable to recover it. 00:28:06.778 [2024-10-17 19:35:30.445777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:06.778 [2024-10-17 19:35:30.445833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:06.778 [2024-10-17 19:35:30.445848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:06.778 [2024-10-17 19:35:30.445854] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:06.778 [2024-10-17 19:35:30.445860] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:06.778 [2024-10-17 19:35:30.445874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.778 qpair failed and we were unable to recover it. 
00:28:06.778 [2024-10-17 19:35:30.455794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.455845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.455858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.455865] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.455871] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.455885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.465827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.465883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.465896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.465903] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.465909] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.465923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.475859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.475912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.475925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.475932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.475938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.475954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.485887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.485940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.485953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.485960] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.485966] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.485980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.495841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.495913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.495926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.495932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.495938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.495952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.505868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.505922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.505935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.505941] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.505947] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.505961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.515980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.516060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.516074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.516080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.516086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.516100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.526005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.526065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.526081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.526088] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.526093] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.526108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.536023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.536084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.536097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.536103] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.536109] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.536123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.545997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.546047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.546061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.546067] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.778 [2024-10-17 19:35:30.546073] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.778 [2024-10-17 19:35:30.546087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.778 qpair failed and we were unable to recover it.
00:28:06.778 [2024-10-17 19:35:30.556103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:06.778 [2024-10-17 19:35:30.556159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:06.778 [2024-10-17 19:35:30.556171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:06.778 [2024-10-17 19:35:30.556178] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:06.779 [2024-10-17 19:35:30.556184] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:06.779 [2024-10-17 19:35:30.556198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.779 qpair failed and we were unable to recover it.
00:28:07.039 [2024-10-17 19:35:30.566114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.039 [2024-10-17 19:35:30.566191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.039 [2024-10-17 19:35:30.566206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.039 [2024-10-17 19:35:30.566213] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.039 [2024-10-17 19:35:30.566219] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.039 [2024-10-17 19:35:30.566237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.039 qpair failed and we were unable to recover it.
00:28:07.039 [2024-10-17 19:35:30.576263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.039 [2024-10-17 19:35:30.576318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.039 [2024-10-17 19:35:30.576331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.039 [2024-10-17 19:35:30.576337] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.039 [2024-10-17 19:35:30.576343] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.039 [2024-10-17 19:35:30.576357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.586232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.586283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.586296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.586303] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.586308] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.586323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.596232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.596286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.596300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.596306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.596312] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.596326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.606262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.606337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.606350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.606357] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.606363] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.606377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.616181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.616283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.616299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.616306] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.616311] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.616325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.626328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.626376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.626390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.626396] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.626402] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.626416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.636369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.636426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.636439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.636446] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.636451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.636465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.646295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.646372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.646386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.646392] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.646398] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.646412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.656290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.656339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.656353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.656359] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.656368] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.656382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.666310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.666370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.666383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.666389] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.666395] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.666409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.676425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.676482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.676495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.676502] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.676508] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.676522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.686455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.686511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.686524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.686531] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.686537] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.686551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.696471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.696521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.696534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.696541] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.696547] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.696562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.706507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.706561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.706574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.706581] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.706587] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.706605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.040 [2024-10-17 19:35:30.716548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.040 [2024-10-17 19:35:30.716606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.040 [2024-10-17 19:35:30.716620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.040 [2024-10-17 19:35:30.716627] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.040 [2024-10-17 19:35:30.716633] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.040 [2024-10-17 19:35:30.716647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.040 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.726547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.726595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.726611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.726618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.726624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.726638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.736581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.736649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.736662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.736669] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.736675] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.736689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.746629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.746676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.746690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.746696] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.746705] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.746720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.756646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.756703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.756717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.756723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.756729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.756743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.766668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.766721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.766734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.766741] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.766747] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.766761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.776700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.776757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.776771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.776777] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.776783] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.776797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.786776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.786838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.786851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.786857] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.786863] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.786878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.796777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.796835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.796849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.796855] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.796861] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.796875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.806795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.806851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.806864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.806870] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.806876] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.806890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.041 [2024-10-17 19:35:30.816830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.041 [2024-10-17 19:35:30.816901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.041 [2024-10-17 19:35:30.816914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.041 [2024-10-17 19:35:30.816921] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.041 [2024-10-17 19:35:30.816926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.041 [2024-10-17 19:35:30.816941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.041 qpair failed and we were unable to recover it.
00:28:07.302 [2024-10-17 19:35:30.826853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.302 [2024-10-17 19:35:30.826916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.302 [2024-10-17 19:35:30.826930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.302 [2024-10-17 19:35:30.826937] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.302 [2024-10-17 19:35:30.826943] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.302 [2024-10-17 19:35:30.826958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.302 qpair failed and we were unable to recover it.
00:28:07.302 [2024-10-17 19:35:30.836837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.302 [2024-10-17 19:35:30.836892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.302 [2024-10-17 19:35:30.836905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.836915] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.836920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.836935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.846842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.846923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.846937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.846943] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.846949] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.846962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.856933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.856985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.856998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.857004] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.857010] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.857024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.866921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.867014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.867027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.867034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.867040] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.867054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.876967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.877022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.877035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.877042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.877047] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.877061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.886986] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.887041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.887054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.887060] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.887066] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.887080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.897064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.897121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.897133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.897140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.897145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.897159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.907070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.907120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.907133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.907140] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.907146] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.907160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.917107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.917161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.917174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.917181] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.917187] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.917201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.927157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.927210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.927226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.927233] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.927239] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.927253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.937155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.937225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.937238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.937245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.937251] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.937264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.947167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.947219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.947233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.947239] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.947245] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.947259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.957205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.957261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.957274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.957281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.957287] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.957302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.967177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.967230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.967243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.303 [2024-10-17 19:35:30.967250] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.303 [2024-10-17 19:35:30.967257] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.303 [2024-10-17 19:35:30.967272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.303 qpair failed and we were unable to recover it.
00:28:07.303 [2024-10-17 19:35:30.977203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.303 [2024-10-17 19:35:30.977258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.303 [2024-10-17 19:35:30.977271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.304 [2024-10-17 19:35:30.977278] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.304 [2024-10-17 19:35:30.977284] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.304 [2024-10-17 19:35:30.977298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.304 qpair failed and we were unable to recover it.
00:28:07.304 [2024-10-17 19:35:30.987313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.304 [2024-10-17 19:35:30.987381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.304 [2024-10-17 19:35:30.987395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.304 [2024-10-17 19:35:30.987401] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.304 [2024-10-17 19:35:30.987407] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.304 [2024-10-17 19:35:30.987422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.304 qpair failed and we were unable to recover it.
00:28:07.304 [2024-10-17 19:35:30.997336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.304 [2024-10-17 19:35:30.997393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.304 [2024-10-17 19:35:30.997407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.304 [2024-10-17 19:35:30.997413] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.304 [2024-10-17 19:35:30.997419] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.304 [2024-10-17 19:35:30.997433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.304 qpair failed and we were unable to recover it.
00:28:07.304 [2024-10-17 19:35:31.007356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.304 [2024-10-17 19:35:31.007408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.304 [2024-10-17 19:35:31.007421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.304 [2024-10-17 19:35:31.007427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.304 [2024-10-17 19:35:31.007433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.304 [2024-10-17 19:35:31.007447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.304 qpair failed and we were unable to recover it.
00:28:07.304 [2024-10-17 19:35:31.017443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:07.304 [2024-10-17 19:35:31.017529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:07.304 [2024-10-17 19:35:31.017545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:07.304 [2024-10-17 19:35:31.017552] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:07.304 [2024-10-17 19:35:31.017557] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:07.304 [2024-10-17 19:35:31.017571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:07.304 qpair failed and we were unable to recover it.
00:28:07.304 [2024-10-17 19:35:31.027402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.304 [2024-10-17 19:35:31.027455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.304 [2024-10-17 19:35:31.027468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.304 [2024-10-17 19:35:31.027475] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.304 [2024-10-17 19:35:31.027481] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.304 [2024-10-17 19:35:31.027495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.304 qpair failed and we were unable to recover it. 00:28:07.304 [2024-10-17 19:35:31.037492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.304 [2024-10-17 19:35:31.037550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.304 [2024-10-17 19:35:31.037563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.304 [2024-10-17 19:35:31.037569] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.304 [2024-10-17 19:35:31.037575] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.304 [2024-10-17 19:35:31.037589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.304 qpair failed and we were unable to recover it. 00:28:07.304 [2024-10-17 19:35:31.047445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.304 [2024-10-17 19:35:31.047494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.304 [2024-10-17 19:35:31.047507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.304 [2024-10-17 19:35:31.047514] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.304 [2024-10-17 19:35:31.047519] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.304 [2024-10-17 19:35:31.047533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.304 qpair failed and we were unable to recover it. 
00:28:07.304 [2024-10-17 19:35:31.057480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.304 [2024-10-17 19:35:31.057539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.304 [2024-10-17 19:35:31.057553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.304 [2024-10-17 19:35:31.057559] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.304 [2024-10-17 19:35:31.057565] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.304 [2024-10-17 19:35:31.057582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.304 qpair failed and we were unable to recover it. 00:28:07.304 [2024-10-17 19:35:31.067511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.304 [2024-10-17 19:35:31.067557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.304 [2024-10-17 19:35:31.067570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.304 [2024-10-17 19:35:31.067577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.304 [2024-10-17 19:35:31.067582] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.304 [2024-10-17 19:35:31.067597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.304 qpair failed and we were unable to recover it. 00:28:07.304 [2024-10-17 19:35:31.077544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.304 [2024-10-17 19:35:31.077597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.304 [2024-10-17 19:35:31.077615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.304 [2024-10-17 19:35:31.077621] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.304 [2024-10-17 19:35:31.077627] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.304 [2024-10-17 19:35:31.077642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.304 qpair failed and we were unable to recover it. 
00:28:07.565 [2024-10-17 19:35:31.087578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.565 [2024-10-17 19:35:31.087641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.565 [2024-10-17 19:35:31.087654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.565 [2024-10-17 19:35:31.087662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.087668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.087682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.097642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.097696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.097709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.097715] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.097721] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.097735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.107629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.107683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.107704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.107710] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.107716] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.107730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 
00:28:07.566 [2024-10-17 19:35:31.117684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.117756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.117769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.117776] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.117782] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.117796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.127684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.127741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.127755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.127761] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.127768] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.127782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.137742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.137793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.137807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.137814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.137820] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.137835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 
00:28:07.566 [2024-10-17 19:35:31.147691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.147744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.147758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.147765] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.147775] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.147790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.157759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.157816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.157830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.157836] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.157842] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.157858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.167732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.167793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.167807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.167815] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.167822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.167836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 
00:28:07.566 [2024-10-17 19:35:31.177814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.177893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.177907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.177914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.177921] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.177936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.187854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.187908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.187922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.187928] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.187934] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.187949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.197848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.197909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.197923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.197930] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.197938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.197953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 
00:28:07.566 [2024-10-17 19:35:31.207957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.208058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.208071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.208077] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.208083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.208097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.217943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.217994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.218007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.218014] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.566 [2024-10-17 19:35:31.218020] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.566 [2024-10-17 19:35:31.218034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.566 qpair failed and we were unable to recover it. 00:28:07.566 [2024-10-17 19:35:31.227983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.566 [2024-10-17 19:35:31.228044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.566 [2024-10-17 19:35:31.228058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.566 [2024-10-17 19:35:31.228064] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.228070] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.228085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 
00:28:07.567 [2024-10-17 19:35:31.237947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.238022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.238036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.238042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.238052] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.238066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.248029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.248081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.248094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.248101] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.248107] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.248121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.257997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.258046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.258059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.258066] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.258072] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.258086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 
00:28:07.567 [2024-10-17 19:35:31.268090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.268147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.268161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.268168] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.268174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.268188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.278098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.278154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.278167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.278173] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.278179] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.278193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.288142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.288204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.288218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.288225] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.288231] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.288245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 
00:28:07.567 [2024-10-17 19:35:31.298154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.298205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.298218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.298224] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.298230] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.298243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.308132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.308182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.308195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.308202] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.308208] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.308222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.318235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.318293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.318306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.318313] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.318319] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.318332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 
00:28:07.567 [2024-10-17 19:35:31.328197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.328269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.328283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.328292] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.328298] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.328312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.338264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.338316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.338330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.338336] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.338342] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.338356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 00:28:07.567 [2024-10-17 19:35:31.348293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.567 [2024-10-17 19:35:31.348346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.567 [2024-10-17 19:35:31.348359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.567 [2024-10-17 19:35:31.348366] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.567 [2024-10-17 19:35:31.348371] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.567 [2024-10-17 19:35:31.348385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.567 qpair failed and we were unable to recover it. 
00:28:07.829 [2024-10-17 19:35:31.358392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.358462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.358476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.358483] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.358489] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.358503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.368309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.368399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.368413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.368420] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.368426] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.368440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.378382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.378438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.378451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.378458] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.378464] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.378478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 
00:28:07.829 [2024-10-17 19:35:31.388441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.388536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.388549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.388556] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.388561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.388575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.398451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.398503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.398517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.398524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.398530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.398544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.408468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.408524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.408537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.408544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.408550] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.408564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 
00:28:07.829 [2024-10-17 19:35:31.418499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.418552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.418566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.418575] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.418581] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.418595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.428528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.428618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.428632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.428638] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.428644] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.428658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.438556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.438623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.438636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.438643] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.438649] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.438662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 
00:28:07.829 [2024-10-17 19:35:31.448593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.448679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.448694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.448701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.448706] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.448721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.458597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.458650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.458664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.458670] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.458676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.458690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 00:28:07.829 [2024-10-17 19:35:31.468641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.829 [2024-10-17 19:35:31.468700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.829 [2024-10-17 19:35:31.468713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.829 [2024-10-17 19:35:31.468720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.829 [2024-10-17 19:35:31.468725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.829 [2024-10-17 19:35:31.468740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.829 qpair failed and we were unable to recover it. 
00:28:07.829 [2024-10-17 19:35:31.478675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.478730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.478743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.478750] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.478756] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.478771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.488708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.488775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.488789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.488795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.488801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.488815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.498718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.498770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.498784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.498790] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.498796] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.498810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 
00:28:07.830 [2024-10-17 19:35:31.508766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.508820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.508836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.508843] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.508849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.508863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.518784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.518838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.518851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.518858] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.518864] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.518878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.528806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.528862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.528875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.528882] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.528888] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.528902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 
00:28:07.830 [2024-10-17 19:35:31.538760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.538812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.538825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.538832] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.538838] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.538851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.548797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.548850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.548864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.548871] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.548877] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.548895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.558894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.558970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.558983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.558990] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.558996] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.559010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 
00:28:07.830 [2024-10-17 19:35:31.568930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.569010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.569023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.569030] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.569035] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.569050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.578902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.578976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.578989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.578995] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.579001] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.579015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.589006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.589065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.589078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.589085] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.589091] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.589105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 
00:28:07.830 [2024-10-17 19:35:31.598998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.599052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.599069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.599075] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.599081] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.599095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:07.830 [2024-10-17 19:35:31.609033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:07.830 [2024-10-17 19:35:31.609087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:07.830 [2024-10-17 19:35:31.609101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:07.830 [2024-10-17 19:35:31.609107] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:07.830 [2024-10-17 19:35:31.609113] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:07.830 [2024-10-17 19:35:31.609127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:07.830 qpair failed and we were unable to recover it. 00:28:08.091 [2024-10-17 19:35:31.619053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.619109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.619123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.091 [2024-10-17 19:35:31.619130] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.091 [2024-10-17 19:35:31.619136] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.091 [2024-10-17 19:35:31.619150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.091 qpair failed and we were unable to recover it. 
00:28:08.091 [2024-10-17 19:35:31.629081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.629133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.629146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.091 [2024-10-17 19:35:31.629153] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.091 [2024-10-17 19:35:31.629159] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.091 [2024-10-17 19:35:31.629173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-10-17 19:35:31.639126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.639201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.639214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.091 [2024-10-17 19:35:31.639221] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.091 [2024-10-17 19:35:31.639226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.091 [2024-10-17 19:35:31.639243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-10-17 19:35:31.649180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.649230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.649243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.091 [2024-10-17 19:35:31.649249] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.091 [2024-10-17 19:35:31.649255] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.091 [2024-10-17 19:35:31.649270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.091 qpair failed and we were unable to recover it. 
00:28:08.091 [2024-10-17 19:35:31.659181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.659235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.659249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.091 [2024-10-17 19:35:31.659255] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.091 [2024-10-17 19:35:31.659261] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.091 [2024-10-17 19:35:31.659275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-10-17 19:35:31.669195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.669249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.669262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.091 [2024-10-17 19:35:31.669268] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.091 [2024-10-17 19:35:31.669274] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.091 [2024-10-17 19:35:31.669288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.091 qpair failed and we were unable to recover it. 00:28:08.091 [2024-10-17 19:35:31.679243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.091 [2024-10-17 19:35:31.679296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.091 [2024-10-17 19:35:31.679309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.679315] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.679321] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.679335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-10-17 19:35:31.689265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.689335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.689348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.689355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.689360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.689374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.699288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.699342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.699354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.699361] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.699367] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.699381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.709323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.709378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.709392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.709399] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.709404] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.709419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-10-17 19:35:31.719377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.719429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.719443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.719450] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.719456] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.719471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.729390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.729445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.729458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.729464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.729473] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.729488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.739408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.739461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.739475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.739481] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.739487] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.739502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-10-17 19:35:31.749430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.749503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.749517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.749524] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.749530] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.749544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.759505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.759559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.759572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.759578] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.759584] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.759598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.769491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.769546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.769559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.769565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.769571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.769585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-10-17 19:35:31.779498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.779557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.779570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.779577] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.779583] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.779597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.789577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.789653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.789666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.789673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.789678] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.789693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.799572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.799629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.799642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.799649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.799655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.799669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 
00:28:08.092 [2024-10-17 19:35:31.809607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.809663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.809676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.809683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.809689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.092 [2024-10-17 19:35:31.809702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.092 qpair failed and we were unable to recover it. 00:28:08.092 [2024-10-17 19:35:31.819624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.092 [2024-10-17 19:35:31.819678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.092 [2024-10-17 19:35:31.819692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.092 [2024-10-17 19:35:31.819701] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.092 [2024-10-17 19:35:31.819707] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.093 [2024-10-17 19:35:31.819722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-10-17 19:35:31.829653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.093 [2024-10-17 19:35:31.829704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.093 [2024-10-17 19:35:31.829717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.093 [2024-10-17 19:35:31.829723] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.093 [2024-10-17 19:35:31.829729] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.093 [2024-10-17 19:35:31.829743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-10-17 19:35:31.839680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.093 [2024-10-17 19:35:31.839745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.093 [2024-10-17 19:35:31.839757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.093 [2024-10-17 19:35:31.839764] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.093 [2024-10-17 19:35:31.839769] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.093 [2024-10-17 19:35:31.839784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-10-17 19:35:31.849731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.093 [2024-10-17 19:35:31.849785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.093 [2024-10-17 19:35:31.849798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.093 [2024-10-17 19:35:31.849805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.093 [2024-10-17 19:35:31.849810] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.093 [2024-10-17 19:35:31.849825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.093 [2024-10-17 19:35:31.859731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.093 [2024-10-17 19:35:31.859786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.093 [2024-10-17 19:35:31.859799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.093 [2024-10-17 19:35:31.859805] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.093 [2024-10-17 19:35:31.859811] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.093 [2024-10-17 19:35:31.859826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.093 qpair failed and we were unable to recover it. 
00:28:08.093 [2024-10-17 19:35:31.869716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.093 [2024-10-17 19:35:31.869769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.093 [2024-10-17 19:35:31.869782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.093 [2024-10-17 19:35:31.869789] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.093 [2024-10-17 19:35:31.869795] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.093 [2024-10-17 19:35:31.869809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.093 qpair failed and we were unable to recover it. 00:28:08.353 [2024-10-17 19:35:31.879812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.353 [2024-10-17 19:35:31.879893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.353 [2024-10-17 19:35:31.879907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.879914] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.879920] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.879935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.889837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.889892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.889906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.889912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.889918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.889932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 
00:28:08.354 [2024-10-17 19:35:31.899857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.899909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.899924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.899932] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.899938] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.899952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.909832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.909886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.909899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.909912] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.909918] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.909933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.919915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.919968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.919981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.919988] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.919994] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.920008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 
00:28:08.354 [2024-10-17 19:35:31.929943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.930021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.930034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.930041] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.930046] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.930061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.939960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.940014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.940027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.940034] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.940039] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.940054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.950029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.950132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.950146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.950152] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.950158] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.950172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 
00:28:08.354 [2024-10-17 19:35:31.960074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.960174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.960188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.960194] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.960200] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.960214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.970052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.970107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.970121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.970127] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.970134] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.970148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:31.980072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.980123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.980136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.980143] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.980149] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.980163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 
00:28:08.354 [2024-10-17 19:35:31.990094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:31.990164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:31.990177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:31.990184] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:31.990190] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:31.990204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:32.000159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:32.000256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:32.000274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:32.000281] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:32.000286] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:32.000301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 00:28:08.354 [2024-10-17 19:35:32.010144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:32.010197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:32.010210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:32.010217] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.354 [2024-10-17 19:35:32.010223] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.354 [2024-10-17 19:35:32.010238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.354 qpair failed and we were unable to recover it. 
00:28:08.354 [2024-10-17 19:35:32.020177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.354 [2024-10-17 19:35:32.020234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.354 [2024-10-17 19:35:32.020247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.354 [2024-10-17 19:35:32.020254] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.020260] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.020274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.030245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.030331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.030344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.030351] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.030357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.030371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.040242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.040299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.040311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.040318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.040324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.040342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 
00:28:08.355 [2024-10-17 19:35:32.050265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.050324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.050338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.050344] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.050350] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.050364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.060291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.060344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.060357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.060364] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.060370] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.060384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.070318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.070366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.070379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.070385] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.070391] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.070406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 
00:28:08.355 [2024-10-17 19:35:32.080291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.080348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.080361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.080367] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.080373] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.080387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.090379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.090433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.090449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.090456] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.090462] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.090476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.100403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.100499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.100512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.100519] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.100525] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.100539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 
00:28:08.355 [2024-10-17 19:35:32.110486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.110535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.110549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.110555] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.110561] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.110576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.120494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.120594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.120612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.120618] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.120624] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.120639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 00:28:08.355 [2024-10-17 19:35:32.130488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.355 [2024-10-17 19:35:32.130537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.355 [2024-10-17 19:35:32.130550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.355 [2024-10-17 19:35:32.130557] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.355 [2024-10-17 19:35:32.130562] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.355 [2024-10-17 19:35:32.130579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.355 qpair failed and we were unable to recover it. 
00:28:08.616 [2024-10-17 19:35:32.140519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.140573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.140586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.140593] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.140599] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.616 [2024-10-17 19:35:32.140618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.616 qpair failed and we were unable to recover it. 00:28:08.616 [2024-10-17 19:35:32.150544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.150594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.150610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.150617] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.150623] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.616 [2024-10-17 19:35:32.150638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.616 qpair failed and we were unable to recover it. 00:28:08.616 [2024-10-17 19:35:32.160584] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.160647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.160660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.160667] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.160673] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.616 [2024-10-17 19:35:32.160687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.616 qpair failed and we were unable to recover it. 
00:28:08.616 [2024-10-17 19:35:32.170634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.170716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.170729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.170736] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.170742] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.616 [2024-10-17 19:35:32.170756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.616 qpair failed and we were unable to recover it. 00:28:08.616 [2024-10-17 19:35:32.180639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.180695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.180711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.180718] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.180724] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.616 [2024-10-17 19:35:32.180738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.616 qpair failed and we were unable to recover it. 00:28:08.616 [2024-10-17 19:35:32.190650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.190700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.190712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.190719] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.190725] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.616 [2024-10-17 19:35:32.190740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.616 qpair failed and we were unable to recover it. 
00:28:08.616 [2024-10-17 19:35:32.200634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.616 [2024-10-17 19:35:32.200689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.616 [2024-10-17 19:35:32.200702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.616 [2024-10-17 19:35:32.200708] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.616 [2024-10-17 19:35:32.200714] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.200728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.210738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.210790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.210803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.210810] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.210816] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.210830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.220730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.220784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.220796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.220803] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.220812] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.220827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 
00:28:08.617 [2024-10-17 19:35:32.230754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.230805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.230819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.230825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.230831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.230845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.240798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.240900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.240914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.240920] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.240926] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.240940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.250842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.250899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.250912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.250919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.250924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.250939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 
00:28:08.617 [2024-10-17 19:35:32.260895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.260977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.260990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.260996] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.261002] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.261016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.270881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.270937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.270950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.270957] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.270962] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.270976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.280901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.280992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.281006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.281013] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.281019] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.281033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 
00:28:08.617 [2024-10-17 19:35:32.290947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.291004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.291017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.291024] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.291030] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.291044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.300965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.301050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.301064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.301070] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.301076] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.301091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.310974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.311028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.311041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.311048] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.311057] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.311071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 
00:28:08.617 [2024-10-17 19:35:32.321016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.321071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.321085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.321092] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.321098] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.321112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.331033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.331086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.331099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.331106] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.331112] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.331126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 00:28:08.617 [2024-10-17 19:35:32.341080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.617 [2024-10-17 19:35:32.341148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.617 [2024-10-17 19:35:32.341161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.617 [2024-10-17 19:35:32.341167] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.617 [2024-10-17 19:35:32.341174] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.617 [2024-10-17 19:35:32.341188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.617 qpair failed and we were unable to recover it. 
00:28:08.618 [2024-10-17 19:35:32.351085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.618 [2024-10-17 19:35:32.351135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.618 [2024-10-17 19:35:32.351148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.618 [2024-10-17 19:35:32.351154] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.618 [2024-10-17 19:35:32.351160] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.618 [2024-10-17 19:35:32.351174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.618 qpair failed and we were unable to recover it. 00:28:08.618 [2024-10-17 19:35:32.361132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.618 [2024-10-17 19:35:32.361196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.618 [2024-10-17 19:35:32.361210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.618 [2024-10-17 19:35:32.361216] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.618 [2024-10-17 19:35:32.361222] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.618 [2024-10-17 19:35:32.361236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.618 qpair failed and we were unable to recover it. 00:28:08.618 [2024-10-17 19:35:32.371148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.618 [2024-10-17 19:35:32.371201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.618 [2024-10-17 19:35:32.371214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.618 [2024-10-17 19:35:32.371220] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.618 [2024-10-17 19:35:32.371226] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.618 [2024-10-17 19:35:32.371240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.618 qpair failed and we were unable to recover it. 
00:28:08.618 [2024-10-17 19:35:32.381172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.618 [2024-10-17 19:35:32.381226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.618 [2024-10-17 19:35:32.381240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.618 [2024-10-17 19:35:32.381247] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.618 [2024-10-17 19:35:32.381253] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.618 [2024-10-17 19:35:32.381267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.618 qpair failed and we were unable to recover it. 00:28:08.618 [2024-10-17 19:35:32.391199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.618 [2024-10-17 19:35:32.391253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.618 [2024-10-17 19:35:32.391265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.618 [2024-10-17 19:35:32.391272] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.618 [2024-10-17 19:35:32.391278] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.618 [2024-10-17 19:35:32.391292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.618 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.401238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.401307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.401321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.401331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.401337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.401351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 
00:28:08.879 [2024-10-17 19:35:32.411256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.411311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.411324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.411330] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.411336] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.411350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.421283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.421370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.421384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.421391] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.421397] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.421411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.431319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.431370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.431384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.431390] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.431396] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.431410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 
00:28:08.879 [2024-10-17 19:35:32.441272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.441330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.441343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.441349] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.441355] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.441370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.451370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.451423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.451438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.451445] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.451451] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.451465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.461428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.461495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.461508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.461515] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.461520] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.461535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 
00:28:08.879 [2024-10-17 19:35:32.471423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.471488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.471501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.471507] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.471513] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.471527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.481498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.481568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.879 [2024-10-17 19:35:32.481582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.879 [2024-10-17 19:35:32.481588] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.879 [2024-10-17 19:35:32.481595] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.879 [2024-10-17 19:35:32.481614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.879 qpair failed and we were unable to recover it. 00:28:08.879 [2024-10-17 19:35:32.491492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.879 [2024-10-17 19:35:32.491546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.491562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.491568] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.491574] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.491588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 
00:28:08.880 [2024-10-17 19:35:32.501511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.501564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.501578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.501584] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.501590] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.501607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.511577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.511649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.511664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.511671] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.511676] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.511691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.521575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.521635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.521649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.521656] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.521662] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.521676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 
00:28:08.880 [2024-10-17 19:35:32.531593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.531653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.531667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.531673] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.531679] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.531693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.541622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.541674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.541688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.541694] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.541700] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.541714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.551680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.551733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.551746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.551753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.551758] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.551773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 
00:28:08.880 [2024-10-17 19:35:32.561719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.561775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.561788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.561795] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.561801] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.561815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.571691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.571753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.571767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.571773] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.571779] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.571793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.581706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.581796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.581813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.581819] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.581825] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.581840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 
00:28:08.880 [2024-10-17 19:35:32.591768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.591822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.591836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.591842] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.591849] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.591863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.601782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.601841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.601855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.601861] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.601867] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.601881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.611846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.611899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.611912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.611919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.611924] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.611938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 
00:28:08.880 [2024-10-17 19:35:32.621755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.621809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.621822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.880 [2024-10-17 19:35:32.621828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.880 [2024-10-17 19:35:32.621834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.880 [2024-10-17 19:35:32.621854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.880 qpair failed and we were unable to recover it. 00:28:08.880 [2024-10-17 19:35:32.631814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.880 [2024-10-17 19:35:32.631899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.880 [2024-10-17 19:35:32.631913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.881 [2024-10-17 19:35:32.631919] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.881 [2024-10-17 19:35:32.631925] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.881 [2024-10-17 19:35:32.631938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.881 qpair failed and we were unable to recover it. 00:28:08.881 [2024-10-17 19:35:32.641903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.881 [2024-10-17 19:35:32.641956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.881 [2024-10-17 19:35:32.641970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.881 [2024-10-17 19:35:32.641976] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.881 [2024-10-17 19:35:32.641982] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.881 [2024-10-17 19:35:32.641996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.881 qpair failed and we were unable to recover it. 
00:28:08.881 [2024-10-17 19:35:32.651952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.881 [2024-10-17 19:35:32.652007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.881 [2024-10-17 19:35:32.652020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.881 [2024-10-17 19:35:32.652026] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.881 [2024-10-17 19:35:32.652032] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.881 [2024-10-17 19:35:32.652046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.881 qpair failed and we were unable to recover it. 00:28:08.881 [2024-10-17 19:35:32.661895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:08.881 [2024-10-17 19:35:32.661977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:08.881 [2024-10-17 19:35:32.661992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:08.881 [2024-10-17 19:35:32.661999] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:08.881 [2024-10-17 19:35:32.662004] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:08.881 [2024-10-17 19:35:32.662019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:08.881 qpair failed and we were unable to recover it. 00:28:09.142 [2024-10-17 19:35:32.671939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.142 [2024-10-17 19:35:32.671994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.142 [2024-10-17 19:35:32.672011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.142 [2024-10-17 19:35:32.672018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.142 [2024-10-17 19:35:32.672023] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.142 [2024-10-17 19:35:32.672038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.142 qpair failed and we were unable to recover it. 
00:28:09.142 [2024-10-17 19:35:32.682013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.142 [2024-10-17 19:35:32.682092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.142 [2024-10-17 19:35:32.682106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.142 [2024-10-17 19:35:32.682112] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.142 [2024-10-17 19:35:32.682119] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.142 [2024-10-17 19:35:32.682133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.142 qpair failed and we were unable to recover it. 00:28:09.142 [2024-10-17 19:35:32.691974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.142 [2024-10-17 19:35:32.692029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.142 [2024-10-17 19:35:32.692043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.142 [2024-10-17 19:35:32.692049] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.142 [2024-10-17 19:35:32.692055] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.142 [2024-10-17 19:35:32.692070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.142 qpair failed and we were unable to recover it. 00:28:09.142 [2024-10-17 19:35:32.702107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.143 [2024-10-17 19:35:32.702161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.143 [2024-10-17 19:35:32.702174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.143 [2024-10-17 19:35:32.702180] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.143 [2024-10-17 19:35:32.702186] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.143 [2024-10-17 19:35:32.702200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.143 qpair failed and we were unable to recover it. 
00:28:09.143 [2024-10-17 19:35:32.712108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.143 [2024-10-17 19:35:32.712157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.143 [2024-10-17 19:35:32.712170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.143 [2024-10-17 19:35:32.712176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.143 [2024-10-17 19:35:32.712185] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.143 [2024-10-17 19:35:32.712200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.143 qpair failed and we were unable to recover it. 00:28:09.143 [2024-10-17 19:35:32.722073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.143 [2024-10-17 19:35:32.722128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.143 [2024-10-17 19:35:32.722142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.143 [2024-10-17 19:35:32.722148] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.143 [2024-10-17 19:35:32.722154] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.143 [2024-10-17 19:35:32.722168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.143 qpair failed and we were unable to recover it. 00:28:09.143 [2024-10-17 19:35:32.732091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.143 [2024-10-17 19:35:32.732142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.143 [2024-10-17 19:35:32.732155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.143 [2024-10-17 19:35:32.732162] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.143 [2024-10-17 19:35:32.732168] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.143 [2024-10-17 19:35:32.732183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.143 qpair failed and we were unable to recover it. 
00:28:09.143 [2024-10-17 19:35:32.742176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.742225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.742238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.742245] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.742250] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.742264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.752256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.752335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.752348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.752355] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.752360] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.752374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.762253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.762312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.762325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.762331] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.762337] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.762351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.772228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.772308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.772322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.772328] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.772334] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.772349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.782280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.782333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.782346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.782352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.782358] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.782372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.792320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.792374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.792387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.792394] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.792399] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.792413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.802363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.802423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.802436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.802443] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.802457] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.802472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.812403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.812464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.812477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.812484] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.812490] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.812505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.822336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.822390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.822403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.822410] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.822416] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.822430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.832351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.832415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.143 [2024-10-17 19:35:32.832428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.143 [2024-10-17 19:35:32.832435] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.143 [2024-10-17 19:35:32.832441] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.143 [2024-10-17 19:35:32.832454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.143 qpair failed and we were unable to recover it.
00:28:09.143 [2024-10-17 19:35:32.842423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.143 [2024-10-17 19:35:32.842513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.842526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.842532] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.842538] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.842552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.852486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.852560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.852573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.852579] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.852586] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.852603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.862490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.862545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.862559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.862565] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.862571] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.862585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.872474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.872525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.872539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.872545] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.872551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.872565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.882606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.882663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.882677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.882683] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.882689] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.882703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.892604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.892659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.892673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.892682] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.892688] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.892702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.902646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.902701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.902714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.902720] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.902726] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.902740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.912617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.912669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.912683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.912689] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.912695] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.912709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.144 [2024-10-17 19:35:32.922743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.144 [2024-10-17 19:35:32.922801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.144 [2024-10-17 19:35:32.922815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.144 [2024-10-17 19:35:32.922822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.144 [2024-10-17 19:35:32.922828] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.144 [2024-10-17 19:35:32.922842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.144 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.932732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.932793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.932807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.932814] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.932819] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.932835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.942702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.942795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.942810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.942816] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.942822] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.942836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.952726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.952802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.952815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.952822] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.952827] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.952842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.962842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.962915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.962928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.962935] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.962941] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.962954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.972831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.972888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.972901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.972907] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.972913] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.972928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.982850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.982931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.982945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.982955] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.982960] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.982974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:32.992905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:32.992980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:32.992993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:32.993000] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:32.993006] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:32.993020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:33.002911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:33.002985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:33.002999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:33.003005] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:33.003011] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:33.003026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:33.012943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:33.012992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:33.013006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.405 [2024-10-17 19:35:33.013012] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.405 [2024-10-17 19:35:33.013018] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.405 [2024-10-17 19:35:33.013032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.405 qpair failed and we were unable to recover it.
00:28:09.405 [2024-10-17 19:35:33.022972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.405 [2024-10-17 19:35:33.023023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.405 [2024-10-17 19:35:33.023036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.023042] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.023048] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.023062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.032997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.033060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.033073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.033080] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.033086] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.033099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.043068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.043119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.043133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.043139] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.043145] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.043159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.053056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.053149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.053163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.053169] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.053175] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.053190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.063117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.063170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.063183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.063190] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.063196] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.063210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.073116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.073172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.073189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.073195] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.073201] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.073215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.083153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.083224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.083237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.083244] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.083249] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.083264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.093191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.093266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.093279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.093285] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.093291] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.093305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.103251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.103304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.103318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.103324] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.103330] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.103344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.113248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.113301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.113314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.113321] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.113327] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.113343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.123278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.123332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.123345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.123352] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.123357] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.123371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.133288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.133343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.133357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.133363] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.133369] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.133383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.143314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.143368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.143381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.143387] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.143393] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.143408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.153345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.153398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.153412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.153418] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.406 [2024-10-17 19:35:33.153425] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.406 [2024-10-17 19:35:33.153439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.406 qpair failed and we were unable to recover it.
00:28:09.406 [2024-10-17 19:35:33.163384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.406 [2024-10-17 19:35:33.163438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.406 [2024-10-17 19:35:33.163454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.406 [2024-10-17 19:35:33.163461] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.407 [2024-10-17 19:35:33.163466] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.407 [2024-10-17 19:35:33.163480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.407 qpair failed and we were unable to recover it.
00:28:09.407 [2024-10-17 19:35:33.173324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.407 [2024-10-17 19:35:33.173384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.407 [2024-10-17 19:35:33.173397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.407 [2024-10-17 19:35:33.173404] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.407 [2024-10-17 19:35:33.173410] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.407 [2024-10-17 19:35:33.173424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.407 qpair failed and we were unable to recover it.
00:28:09.407 [2024-10-17 19:35:33.183447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.407 [2024-10-17 19:35:33.183506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.407 [2024-10-17 19:35:33.183520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.407 [2024-10-17 19:35:33.183528] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.407 [2024-10-17 19:35:33.183533] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.407 [2024-10-17 19:35:33.183548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.407 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.193494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.193550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.193565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.193572] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.193578] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.193593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.203522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.203583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.203598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.203608] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.203617] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.203632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.213522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.213579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.213593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.213604] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.213610] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.213625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.223548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.223605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.223619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.223626] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.223632] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.223647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.233574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.233626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.233639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.233646] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.233652] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.233667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.243611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.243665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.243679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.243685] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.243691] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.243705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.253668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.253723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.253737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.253744] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.253750] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.253765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.263710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.263758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.263771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.263778] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.263784] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.263798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.273686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.273738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.273751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.273758] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.273764] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.273778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.283674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.283732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.283746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.283753] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.283759] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.283773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.293760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.293842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.293856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.293863] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.293872] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.293887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.303750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.668 [2024-10-17 19:35:33.303808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.668 [2024-10-17 19:35:33.303821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.668 [2024-10-17 19:35:33.303828] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.668 [2024-10-17 19:35:33.303834] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90
00:28:09.668 [2024-10-17 19:35:33.303849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:28:09.668 qpair failed and we were unable to recover it.
00:28:09.668 [2024-10-17 19:35:33.313803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.668 [2024-10-17 19:35:33.313854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.668 [2024-10-17 19:35:33.313868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.668 [2024-10-17 19:35:33.313874] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.668 [2024-10-17 19:35:33.313880] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.668 [2024-10-17 19:35:33.313894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.668 qpair failed and we were unable to recover it. 00:28:09.668 [2024-10-17 19:35:33.323893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.668 [2024-10-17 19:35:33.323998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.668 [2024-10-17 19:35:33.324012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.668 [2024-10-17 19:35:33.324018] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.668 [2024-10-17 19:35:33.324024] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.668 [2024-10-17 19:35:33.324038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.333865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.333919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.333933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.333939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.333945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.333959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 
00:28:09.669 [2024-10-17 19:35:33.343896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.343951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.343965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.343972] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.343978] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.343992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.353920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.353987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.354000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.354006] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.354012] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.354026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.363962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.364018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.364031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.364037] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.364043] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.364057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 
00:28:09.669 [2024-10-17 19:35:33.373977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.374032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.374045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.374052] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.374058] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.374072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.384042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.384096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.384109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.384118] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.384124] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.384138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.394070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.394126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.394139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.394146] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.394152] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.394166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 
00:28:09.669 [2024-10-17 19:35:33.404081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.404137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.404150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.404156] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.404162] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.404176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.414139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.414218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.414231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.414238] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.414244] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.414258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.424126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.424178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.424191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.424198] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.424204] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.424217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 
00:28:09.669 [2024-10-17 19:35:33.434134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.434185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.434198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.434205] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.434211] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.434225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.669 [2024-10-17 19:35:33.444245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.669 [2024-10-17 19:35:33.444299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.669 [2024-10-17 19:35:33.444312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.669 [2024-10-17 19:35:33.444318] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.669 [2024-10-17 19:35:33.444324] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.669 [2024-10-17 19:35:33.444338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.669 qpair failed and we were unable to recover it. 00:28:09.929 [2024-10-17 19:35:33.454206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.929 [2024-10-17 19:35:33.454302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.929 [2024-10-17 19:35:33.454319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.929 [2024-10-17 19:35:33.454325] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.929 [2024-10-17 19:35:33.454332] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.929 [2024-10-17 19:35:33.454346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.929 qpair failed and we were unable to recover it. 
00:28:09.929 [2024-10-17 19:35:33.464286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.929 [2024-10-17 19:35:33.464349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.929 [2024-10-17 19:35:33.464363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.929 [2024-10-17 19:35:33.464370] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.929 [2024-10-17 19:35:33.464376] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.929 [2024-10-17 19:35:33.464390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.929 qpair failed and we were unable to recover it. 00:28:09.929 [2024-10-17 19:35:33.474317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.929 [2024-10-17 19:35:33.474386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.929 [2024-10-17 19:35:33.474399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.929 [2024-10-17 19:35:33.474409] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.929 [2024-10-17 19:35:33.474415] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.929 [2024-10-17 19:35:33.474429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.929 qpair failed and we were unable to recover it. 00:28:09.929 [2024-10-17 19:35:33.484314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.929 [2024-10-17 19:35:33.484367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.929 [2024-10-17 19:35:33.484380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.929 [2024-10-17 19:35:33.484386] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.929 [2024-10-17 19:35:33.484392] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.929 [2024-10-17 19:35:33.484406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 
00:28:09.930 [2024-10-17 19:35:33.494351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.494407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.494421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.494427] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.494433] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.494447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.504361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.504444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.504458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.504464] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.504470] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.504484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.514386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.514439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.514452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.514459] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.514465] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.514479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 
00:28:09.930 [2024-10-17 19:35:33.524387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.524441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.524455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.524462] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.524468] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.524482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.534444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.534496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.534510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.534516] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.534522] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.534536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.544472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.544524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.544537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.544544] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.544551] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.544565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 
00:28:09.930 [2024-10-17 19:35:33.554535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.554587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.554603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.554610] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.554616] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.554630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.564537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.564627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.564643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.564649] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.564655] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.564669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.574562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.574621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.574634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.574641] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.574647] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.574661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 
00:28:09.930 [2024-10-17 19:35:33.584588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.584642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.584655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.584662] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.584668] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.584682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.594637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.594703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.594716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.594722] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.594728] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.594742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 00:28:09.930 [2024-10-17 19:35:33.604671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.930 [2024-10-17 19:35:33.604729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.930 [2024-10-17 19:35:33.604742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.930 [2024-10-17 19:35:33.604748] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.930 [2024-10-17 19:35:33.604754] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.930 [2024-10-17 19:35:33.604771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.930 qpair failed and we were unable to recover it. 
00:28:09.931 [2024-10-17 19:35:33.614712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.614798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.614811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.614817] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.614823] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.614837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 00:28:09.931 [2024-10-17 19:35:33.624708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.624759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.624772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.624779] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.624785] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.624798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 00:28:09.931 [2024-10-17 19:35:33.634742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.634806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.634819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.634825] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.634831] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.634845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 
00:28:09.931 [2024-10-17 19:35:33.644776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.644830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.644843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.644850] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.644856] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.644870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 00:28:09.931 [2024-10-17 19:35:33.654821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.654877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.654893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.654900] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.654905] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.654920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 00:28:09.931 [2024-10-17 19:35:33.664871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.664920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.664933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.664939] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.664945] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.664959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 
00:28:09.931 [2024-10-17 19:35:33.674845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.674903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.674916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.674922] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.674928] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f84fc000b90 00:28:09.931 [2024-10-17 19:35:33.674942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:09.931 qpair failed and we were unable to recover it. 00:28:09.931 [2024-10-17 19:35:33.684922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.685022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.685075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.685098] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.685117] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8500000b90 00:28:09.931 [2024-10-17 19:35:33.685164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.931 qpair failed and we were unable to recover it. 00:28:09.931 [2024-10-17 19:35:33.694908] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:09.931 [2024-10-17 19:35:33.694977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:09.931 [2024-10-17 19:35:33.695004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:09.931 [2024-10-17 19:35:33.695017] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:09.931 [2024-10-17 19:35:33.695029] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8500000b90 00:28:09.931 [2024-10-17 19:35:33.695062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:09.931 qpair failed and we were unable to recover it. 
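The records above all show the same sequence: the target no longer recognizes controller ID 0x1, so every fabrics CONNECT for a new I/O queue pair is rejected with sct 1, sc 130 (a command-specific CONNECT failure status) and the host gives the qpair up. A minimal manual replay of one such attempt, assuming stock nvme-cli on the initiator and the same listener; the -t/-a/-s/-n flags mirror the trtype/traddr/trsvcid/subnqn fields logged by nvme_fabric.c:

# Hypothetical one-shot replay of a single CONNECT attempt from the log above;
# while the subsystem reports controller ID 0x1 as unknown, this is expected to
# fail the same way the in-test attempts do.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    || echo 'connect failed, matching the sct 1 / sc 130 status in the log'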
00:28:09.931 [2024-10-17 19:35:33.704943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:09.931 [2024-10-17 19:35:33.705032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:09.931 [2024-10-17 19:35:33.705085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:09.931 [2024-10-17 19:35:33.705109] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:09.931 [2024-10-17 19:35:33.705130] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8508000b90
00:28:09.931 [2024-10-17 19:35:33.705177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:09.931 qpair failed and we were unable to recover it.
00:28:10.191 [2024-10-17 19:35:33.714985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.191 [2024-10-17 19:35:33.715051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.191 [2024-10-17 19:35:33.715078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.191 [2024-10-17 19:35:33.715091] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.191 [2024-10-17 19:35:33.715103] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8508000b90
00:28:10.191 [2024-10-17 19:35:33.715129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:10.191 qpair failed and we were unable to recover it.
00:28:10.191 [2024-10-17 19:35:33.715241] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:28:10.191 A controller has encountered a failure and is being reset.
00:28:10.191 [2024-10-17 19:35:33.724994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.191 [2024-10-17 19:35:33.725098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.191 [2024-10-17 19:35:33.725152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.191 [2024-10-17 19:35:33.725176] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.191 [2024-10-17 19:35:33.725197] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb48ca0
00:28:10.191 [2024-10-17 19:35:33.725240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.191 qpair failed and we were unable to recover it.
00:28:10.191 [2024-10-17 19:35:33.734958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:10.191 [2024-10-17 19:35:33.735034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:10.191 [2024-10-17 19:35:33.735058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:10.191 [2024-10-17 19:35:33.735071] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:10.191 [2024-10-17 19:35:33.735083] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xb48ca0
00:28:10.191 [2024-10-17 19:35:33.735109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:10.191 qpair failed and we were unable to recover it.
00:28:10.191 Controller properly reset.
00:28:10.191 Initializing NVMe Controllers
00:28:10.191 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:10.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:10.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:28:10.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:28:10.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:28:10.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:28:10.191 Initialization complete. Launching workers.
00:28:10.191 Starting thread on core 1
00:28:10.191 Starting thread on core 2
00:28:10.191 Starting thread on core 3
00:28:10.191 Starting thread on core 0
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:28:10.191
00:28:10.191 real 0m10.803s
00:28:10.191 user 0m19.169s
00:28:10.191 sys 0m4.718s
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:10.191 ************************************
00:28:10.191 END TEST nvmf_target_disconnect_tc2
00:28:10.191 ************************************
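The teardown that follows (target_disconnect.sh handing off to nvmftestfini/nvmfcleanup) reduces to a handful of host commands; a condensed, hypothetical replay of it, reusing the module names, app pid, and interface traced below:

# Hypothetical condensed replay of the nvmftestfini teardown traced below.
set +e
modprobe -v -r nvme-tcp       # unloads nvme_tcp plus the nvme_fabrics/nvme_keyring deps
modprobe -v -r nvme-fabrics
kill 2257433                  # the nvmf target app pid (reactor_4) from this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # strip the SPDK test rules
ip -4 addr flush cvl_0_1      # flush the address left on the test NIC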
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:10.191 rmmod nvme_tcp
00:28:10.191 rmmod nvme_fabrics
00:28:10.191 rmmod nvme_keyring
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 2257433 ']'
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 2257433
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2257433 ']'
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2257433
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2257433
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2257433'
00:28:10.191 killing process with pid 2257433
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2257433
00:28:10.191 19:35:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2257433
00:28:10.450 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:28:10.450 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-save
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@789 -- # iptables-restore
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:10.451 19:35:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:12.988 19:35:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:12.988
00:28:12.988 real 0m19.577s
00:28:12.988 user 0m46.938s
00:28:12.988 sys 0m9.613s
00:28:12.988 19:35:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:12.988 19:35:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:28:12.988 ************************************
00:28:12.988 END TEST nvmf_target_disconnect
00:28:12.988 ************************************
00:28:12.988 19:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:28:12.988
00:28:12.988 real 5m52.525s
00:28:12.988 user 10m32.636s
00:28:12.988 sys 1m58.765s
00:28:12.988 19:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:12.988 19:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:12.988 ************************************
00:28:12.988 END TEST nvmf_host
00:28:12.988 ************************************
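Every START/END banner pair and real/user/sys summary in this log comes from the harness's run_test wrapper, which times the test script between banners. A minimal sketch of that pattern; the function body here is a guess, only the banners and the time(1)-style output are attested by the log:

# Hypothetical reduction of the run_test banner/timing wrapper.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
# usage, mirroring the invocation below:
# run_test nvmf_target_core_interrupt_mode ./test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode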
00:28:12.988 19:35:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:28:12.988 19:35:36 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:28:12.988 19:35:36 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:28:12.988 19:35:36 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:28:12.988 19:35:36 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:28:12.988 19:35:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:12.988 ************************************
00:28:12.988 START TEST nvmf_target_core_interrupt_mode
00:28:12.988 ************************************
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:28:12.988 * Looking for test storage...
00:28:12.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:28:12.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.988 --rc genhtml_branch_coverage=1
00:28:12.988 --rc genhtml_function_coverage=1
00:28:12.988 --rc genhtml_legend=1
00:28:12.988 --rc geninfo_all_blocks=1
00:28:12.988 --rc geninfo_unexecuted_blocks=1
00:28:12.988
00:28:12.988 '
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:28:12.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.988 --rc genhtml_branch_coverage=1
00:28:12.988 --rc genhtml_function_coverage=1
00:28:12.988 --rc genhtml_legend=1
00:28:12.988 --rc geninfo_all_blocks=1
00:28:12.988 --rc geninfo_unexecuted_blocks=1
00:28:12.988
00:28:12.988 '
00:28:12.988 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:28:12.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.989 --rc genhtml_branch_coverage=1
00:28:12.989 --rc genhtml_function_coverage=1
00:28:12.989 --rc genhtml_legend=1
00:28:12.989 --rc geninfo_all_blocks=1
00:28:12.989 --rc geninfo_unexecuted_blocks=1
00:28:12.989
00:28:12.989 '
00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:12.989 --rc genhtml_branch_coverage=1
00:28:12.989 --rc genhtml_function_coverage=1
00:28:12.989 --rc genhtml_legend=1
00:28:12.989 --rc geninfo_all_blocks=1
00:28:12.989 --rc geninfo_unexecuted_blocks=1
00:28:12.989
00:28:12.989 '
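The lt/cmp_versions walk traced above splits both version strings on '.', '-' and ':' and compares them field by field until one side wins. A simplified reconstruction of that comparison, covering only the '<' case exercised here and folding the traced decimal() helper into the loop:

# Simplified sketch of the cmp_versions comparison traced above ('<' case only).
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not strictly less-than
}
cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # returns 0, matching the trace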
Linux = Linux ']' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:12.989 ************************************ 00:28:12.989 START TEST nvmf_abort 00:28:12.989 ************************************ 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:12.989 * Looking for test storage... 00:28:12.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.989 --rc genhtml_branch_coverage=1 00:28:12.989 --rc genhtml_function_coverage=1 00:28:12.989 --rc genhtml_legend=1 00:28:12.989 --rc geninfo_all_blocks=1 00:28:12.989 --rc geninfo_unexecuted_blocks=1 00:28:12.989 00:28:12.989 ' 00:28:12.989 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.989 --rc genhtml_branch_coverage=1 00:28:12.989 --rc genhtml_function_coverage=1 00:28:12.989 --rc genhtml_legend=1 00:28:12.989 --rc geninfo_all_blocks=1 00:28:12.989 --rc geninfo_unexecuted_blocks=1 00:28:12.989 00:28:12.989 ' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:12.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.990 --rc genhtml_branch_coverage=1 00:28:12.990 --rc genhtml_function_coverage=1 00:28:12.990 --rc genhtml_legend=1 00:28:12.990 --rc geninfo_all_blocks=1 00:28:12.990 --rc geninfo_unexecuted_blocks=1 00:28:12.990 00:28:12.990 ' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:12.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.990 --rc genhtml_branch_coverage=1 00:28:12.990 --rc genhtml_function_coverage=1 00:28:12.990 --rc genhtml_legend=1 00:28:12.990 --rc geninfo_all_blocks=1 00:28:12.990 --rc geninfo_unexecuted_blocks=1 00:28:12.990 00:28:12.990 ' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.990 19:35:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.990 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.563 19:35:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:19.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
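
The allow-list arrays built just above (e810, x722, mlx, keyed by Intel and Mellanox device IDs) drive the device scan traced around this point. A minimal sketch of the same technique, reading vendor and device IDs straight out of sysfs rather than through the script's pci_bus_cache helper; the cache, the ice-driver checks, and the cvl_* interface names are specific to this CI rig and are not reproduced here:

# Sketch: scan PCI devices against an e810 allow-list and collect their netdevs.
# Device IDs 0x1592/0x159b mirror the entries seen in the trace.
intel=0x8086
e810_ids=(0x1592 0x159b)
net_devs=()
for dev in /sys/bus/pci/devices/*; do
  vendor=$(<"$dev/vendor")
  device=$(<"$dev/device")
  [[ $vendor == "$intel" ]] || continue
  for id in "${e810_ids[@]}"; do
    [[ $device == "$id" ]] || continue
    echo "Found ${dev##*/} ($vendor - $device)"
    for nd in "$dev"/net/*; do          # kernel netdev(s) behind this PCI function
      [[ -e $nd ]] && net_devs+=("${nd##*/}")
    done
  done
done
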
00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:19.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.563 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:19.563 Found net devices under 0000:86:00.0: cvl_0_0 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@415 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:19.564 Found net devices under 0000:86:00.1: cvl_0_1 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:19.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:28:19.564 00:28:19.564 --- 10.0.0.2 ping statistics --- 00:28:19.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.564 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:28:19.564 00:28:19.564 --- 10.0.0.1 ping statistics --- 00:28:19.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.564 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # 
nvmfpid=2261974 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 2261974 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2261974 ']' 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.564 [2024-10-17 19:35:42.744504] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:19.564 [2024-10-17 19:35:42.745449] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:28:19.564 [2024-10-17 19:35:42.745488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.564 [2024-10-17 19:35:42.825821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:19.564 [2024-10-17 19:35:42.865151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.564 [2024-10-17 19:35:42.865185] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.564 [2024-10-17 19:35:42.865193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.564 [2024-10-17 19:35:42.865199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.564 [2024-10-17 19:35:42.865204] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.564 [2024-10-17 19:35:42.866584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.564 [2024-10-17 19:35:42.866691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.564 [2024-10-17 19:35:42.866692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.564 [2024-10-17 19:35:42.932404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:19.564 [2024-10-17 19:35:42.933079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:19.564 [2024-10-17 19:35:42.933285] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
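
Condensed, the nvmf_tcp_init and nvmfappstart sequence traced above reduces to the commands below. The ip, iptables, ping, and nvmf_tgt invocations are copied from the trace; the polling loop at the end is a stand-in for the script's waitforlisten helper and is an assumption, not its actual body:

# Move one port of the NIC into a namespace, address both ends, open the
# NVMe/TCP port, then start nvmf_tgt inside the namespace in interrupt mode.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: abort test'
# The comment tag lets teardown strip the rule later with:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
ping -c 1 10.0.0.2                              # root ns -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> root ns
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
# Stand-in for waitforlisten: /var/tmp/spdk.sock is a Unix socket, so it is
# reachable from the root namespace even though the app's network is not.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.5
done
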
00:28:19.564 [2024-10-17 19:35:42.933431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.564 19:35:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.564 [2024-10-17 19:35:43.011467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.564 Malloc0 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.564 Delay0 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.564 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.565 [2024-10-17 19:35:43.099438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.565 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:19.565 [2024-10-17 19:35:43.184406] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:21.469 Initializing NVMe Controllers 00:28:21.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:21.469 controller IO queue size 128 less than required 00:28:21.469 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:21.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:21.469 Initialization complete. Launching workers. 
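
Stripped of the xtrace and rpc_cmd plumbing, target/abort.sh issues the RPC sequence below before launching the abort example. The commands and flags are copied verbatim from the trace; the rpc.py wrapper path and the flag annotations in the comments are my reading, not the script's own text:

rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256          # TCP transport, flags as traced
$rpc bdev_malloc_create 64 4096 -b Malloc0                   # 64 MiB RAM bdev, 4 KiB blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000              # per-I/O latency knobs (us), set
                                                             # high so aborts catch I/O in flight
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# One core, queue depth 128, one second of I/O plus aborts against that queue;
# the 38271 failed I/Os in the stats below are the successfully aborted ones.
./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
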
00:28:21.469 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38271 00:28:21.469 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38328, failed to submit 66 00:28:21.469 success 38271, unsuccessful 57, failed 0 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:21.469 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:21.469 rmmod nvme_tcp 00:28:21.469 rmmod nvme_fabrics 00:28:21.469 rmmod nvme_keyring 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 2261974 ']' 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 2261974 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2261974 ']' 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2261974 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2261974 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2261974' 00:28:21.728 killing process with pid 2261974 
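
The killprocess helper traced here guards the kill with a liveness probe and a sanity check on the process name (reactor_1, an SPDK reactor thread, comes from the trace). A minimal re-creation of its Linux path; the real helper also branches on uname and differs in detail:

killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if it already exited
  local comm
  comm=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_1 for nvmf_tgt
  [[ $comm != sudo ]] || return 1               # never signal a sudo wrapper by mistake
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true               # reap it if it is our child
}
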
00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2261974 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2261974 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-save 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:28:21.728 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:28:21.987 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:21.987 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:21.987 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.987 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.987 19:35:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:23.892 00:28:23.892 real 0m11.032s 00:28:23.892 user 0m9.976s 00:28:23.892 sys 0m5.675s 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.892 ************************************ 00:28:23.892 END TEST nvmf_abort 00:28:23.892 ************************************ 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:23.892 ************************************ 00:28:23.892 START TEST nvmf_ns_hotplug_stress 00:28:23.892 ************************************ 00:28:23.892 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:24.152 * Looking for test storage... 
00:28:24.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:24.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.152 --rc genhtml_branch_coverage=1 00:28:24.152 --rc genhtml_function_coverage=1 00:28:24.152 --rc genhtml_legend=1 00:28:24.152 --rc geninfo_all_blocks=1 00:28:24.152 --rc geninfo_unexecuted_blocks=1 00:28:24.152 00:28:24.152 ' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:24.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.152 --rc genhtml_branch_coverage=1 00:28:24.152 --rc genhtml_function_coverage=1 00:28:24.152 --rc genhtml_legend=1 00:28:24.152 --rc geninfo_all_blocks=1 00:28:24.152 --rc geninfo_unexecuted_blocks=1 00:28:24.152 00:28:24.152 ' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:24.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.152 --rc genhtml_branch_coverage=1 00:28:24.152 --rc genhtml_function_coverage=1 00:28:24.152 --rc genhtml_legend=1 00:28:24.152 --rc geninfo_all_blocks=1 00:28:24.152 --rc geninfo_unexecuted_blocks=1 00:28:24.152 00:28:24.152 ' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:24.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.152 --rc genhtml_branch_coverage=1 00:28:24.152 --rc genhtml_function_coverage=1 
00:28:24.152 --rc genhtml_legend=1 00:28:24.152 --rc geninfo_all_blocks=1 00:28:24.152 --rc geninfo_unexecuted_blocks=1 00:28:24.152 00:28:24.152 ' 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:24.152 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.153 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:30.724 19:35:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:30.724 19:35:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:30.724 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:30.724 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.724 
19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:30.724 Found net devices under 0000:86:00.0: cvl_0_0 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:30.724 Found net devices under 0000:86:00.1: cvl_0_1 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.724 19:35:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:30.724 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:30.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:28:30.725 00:28:30.725 --- 10.0.0.2 ping statistics --- 00:28:30.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.725 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:28:30.725 00:28:30.725 --- 10.0.0.1 ping statistics --- 00:28:30.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.725 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=2265965 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 2265965 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2265965 ']' 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
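Condensed from the xtrace above, the network prep that nvmftestinit performs on this phy run is a two-endpoint NVMe/TCP topology: the target-side port (cvl_0_0) is moved into a private namespace while the initiator-side port (cvl_0_1) stays in the root namespace. A minimal sketch of the equivalent commands, with interface names and addresses taken from this run (the surrounding helper functions live in nvmf/common.sh and are not reproduced here; the real iptables call also tags the rule with an SPDK_NVMF comment for later cleanup):

#!/usr/bin/env bash
# Target NIC port goes into its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator gets 10.0.0.1, target (inside the namespace) gets 10.0.0.2.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions,
# exactly as the ping output above shows.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1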
00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:30.725 19:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:30.725 [2024-10-17 19:35:53.855171] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:30.725 [2024-10-17 19:35:53.856024] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:28:30.725 [2024-10-17 19:35:53.856053] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.725 [2024-10-17 19:35:53.931826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:30.725 [2024-10-17 19:35:53.971543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.725 [2024-10-17 19:35:53.971575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.725 [2024-10-17 19:35:53.971583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.725 [2024-10-17 19:35:53.971589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.725 [2024-10-17 19:35:53.971596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.725 [2024-10-17 19:35:53.972914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:30.725 [2024-10-17 19:35:53.973021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.725 [2024-10-17 19:35:53.973022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:30.725 [2024-10-17 19:35:54.038438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:30.725 [2024-10-17 19:35:54.039114] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:30.725 [2024-10-17 19:35:54.039499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:30.725 [2024-10-17 19:35:54.039595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
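nvmfappstart then launches the target inside that namespace and blocks until the RPC socket answers, which is why the trace interleaves "Waiting for process to start up..." with the DPDK/reactor startup notices. Roughly what that amounts to, with waitforlisten reduced to a simple poll (a sketch of the idea, not the actual common.sh implementation):

# Start nvmf_tgt in the target namespace: cores 1-3 (-m 0xE), all trace
# groups (-e 0xFFFF), interrupt mode as requested by --interrupt-mode.
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!

# Hypothetical simplification of waitforlisten: poll until the app is
# alive and its RPC socket responds.
rpc_sock=/var/tmp/spdk.sock
until ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || { echo 'nvmf_tgt died during startup' >&2; exit 1; }
    sleep 0.1
done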
00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:30.725 [2024-10-17 19:35:54.273865] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:30.725 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.985 [2024-10-17 19:35:54.682358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:30.985 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:31.243 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:31.502 Malloc0 00:28:31.502 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:31.761 Delay0 00:28:31.761 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.761 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:32.020 NULL1 00:28:32.020 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
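With the target up, ns_hotplug_stress.sh provisions the subsystem over RPC exactly as the trace shows, starts a 30-second randread perf job against it, and then repeatedly hot-removes and re-adds namespace 1 while growing NULL1 underneath the initiator. A condensed sketch of that sequence (rpc.py path shortened; every command and argument is taken verbatim from this run, only the loop scaffolding is paraphrased):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# 30 s of queued random reads; the hotplug loop runs while this is active.
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!

null_size=1000
while kill -0 "$perf_pid"; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done

The bursts of "Read completed with error (sct=0, sc=11)" in the output below are the expected initiator-side fallout of this loop: sct=0/sc=0x0b is the NVMe generic status Invalid Namespace or Format, which is what in-flight reads return while namespace 1 is detached, and the perf tool suppresses the repeats.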
00:28:32.279 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2266272 00:28:32.279 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:32.279 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:32.279 19:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.656 Read completed with error (sct=0, sc=11) 00:28:33.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.656 19:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.656 19:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:33.656 19:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:33.915 true 00:28:33.915 19:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:33.915 19:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.852 19:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:34.852 19:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:34.852 19:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:35.110 true 00:28:35.110 19:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:35.110 19:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.110 19:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.370 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:35.370 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:35.628 true 00:28:35.628 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:35.628 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:36.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.564 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:36.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:36.823 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:36.823 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:37.082 true 00:28:37.082 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:37.082 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.019 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.019 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.019 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:38.019 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:38.277 true 00:28:38.277 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:38.277 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:38.535 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.793 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:38.793 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:38.793 true 00:28:39.051 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:39.051 19:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:39.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.987 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:39.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.245 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:40.245 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:40.504 true 00:28:40.504 19:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:40.504 19:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.440 19:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.440 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:41.440 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:41.440 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:41.699 true 00:28:41.699 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:41.699 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.699 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.957 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:41.957 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:42.215 true 00:28:42.215 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:42.215 19:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.591 19:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.591 19:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:43.591 19:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:43.848 true 00:28:43.849 19:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:43.849 19:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:44.506 19:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:44.763 19:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:44.763 19:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:45.021 true 00:28:45.021 19:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:45.021 19:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.279 19:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.538 19:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:45.538 19:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:45.538 true 00:28:45.797 19:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:45.797 19:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.732 19:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.990 19:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:46.990 19:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:46.990 true 00:28:46.990 19:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:46.990 19:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.249 19:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.507 19:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:47.507 19:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:47.766 true 00:28:47.766 19:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:47.766 19:36:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.702 19:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:28:48.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.961 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:48.961 19:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:48.961 19:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:49.220 true 00:28:49.220 19:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:49.220 19:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:49.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.047 19:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.047 19:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:50.047 19:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:50.306 true 00:28:50.306 19:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:50.306 19:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.565 19:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.824 19:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:50.824 19:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:50.824 true 00:28:50.824 19:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:50.824 19:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.201 19:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.201 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.201 19:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:52.201 19:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:52.460 true 00:28:52.460 19:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:52.460 19:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.396 19:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.396 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:53.396 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:53.661 true 00:28:53.661 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:53.661 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.923 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.181 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:54.181 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:54.181 true 00:28:54.181 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:54.181 19:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 19:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.557 19:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:55.557 19:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:55.816 true 00:28:55.816 19:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:55.816 19:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.752 19:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.752 19:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:56.752 19:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:57.011 true 00:28:57.011 19:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:57.011 19:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.271 19:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.271 19:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:57.271 19:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:57.529 true 00:28:57.529 19:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:57.529 19:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 19:36:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:58.905 19:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:58.905 19:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:59.164 true 00:28:59.164 19:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:28:59.164 19:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.101 19:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.101 19:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:00.101 19:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:00.360 true 00:29:00.360 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:29:00.360 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.618 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.876 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:00.876 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:00.876 true 00:29:00.876 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272 00:29:00.876 19:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.252 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11)
00:29:02.252 19:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:29:02.252 19:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:29:02.252 19:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:29:02.510 true
00:29:02.510 19:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272
00:29:02.510 19:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:03.446 Initializing NVMe Controllers
00:29:03.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:03.446 Controller IO queue size 128, less than required.
00:29:03.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.446 Controller IO queue size 128, less than required.
00:29:03.446 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:03.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:03.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:29:03.446 Initialization complete. Launching workers.
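The summary that follows is the I/O generator's shutdown report. NSID 1 sits behind the Delay0 bdev that was being hot-plugged, which accounts for its much higher average latency; the Total row is the plain sum of the per-namespace IOPS and MiB/s columns and an IOPS-weighted mean latency, e.g.:

    2172.49 + 18562.78 = 20735.27 IOPS
    (2172.49 * 42749.66 + 18562.78 * 6895.26) / 20735.27 ~= 10651.8 us

both of which match the table.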
00:29:03.446 ========================================================
00:29:03.446                                                                            Latency(us)
00:29:03.446 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:29:03.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2172.49       1.06   42749.66    2771.22 1012071.13
00:29:03.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   18562.78       9.06    6895.26    1081.59  370010.64
00:29:03.446 ========================================================
00:29:03.446 Total                                                                  :   20735.27      10.12   10651.83    1081.59 1012071.13
00:29:03.446
00:29:03.446 19:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:29:03.446 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:29:03.446 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:29:03.704 true
00:29:03.704 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2266272
00:29:03.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2266272) - No such process
00:29:03.704 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2266272
00:29:03.704 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:03.962 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:29:04.221 null0
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:29:04.221 19:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:29:04.480 null1
00:29:04.480 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:29:04.480
19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:04.480 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:04.739 null2 00:29:04.739 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:04.739 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:04.739 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:04.739 null3 00:29:04.739 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:04.739 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:04.739 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:04.998 null4 00:29:04.998 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:04.998 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:04.998 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:05.257 null5 00:29:05.257 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:05.257 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:05.257 19:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:05.257 null6 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:05.517 null7 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.517 19:36:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
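From @58 onward the single-namespace loop gives way to the parallel phase: eight null bdevs (null0-null7) are created and eight add_remove workers are launched in the background, one namespace ID each, before the harness waits on them. A condensed sketch of that phase, reusing the $rpc shorthand introduced above (add_remove mirrors the @14-@18 records):

    add_remove() {                           # @14: one worker per (nsid, bdev) pair
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do       # @16: ten add/remove rounds per worker
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }
    nthreads=8                               # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do     # @59/@62
        "$rpc" bdev_null_create "null$i" 100 4096   # @60: 100 MB bdev, 4096-byte blocks
        add_remove "$((i + 1))" "null$i" &          # @63: NSIDs 1-8 against null0-null7
        pids+=($!)                                  # @64: remember each worker's PID
    done

Because all eight workers hammer the same subsystem concurrently, the add/remove records from here on interleave in whatever order the workers get scheduled.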
00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.517 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
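By this point every worker PID has been appended to pids; the wait 2272290 2272291 2272294 2272295 2272297 2272299 2272301 2272303 record just below is the @66 line after array expansion:

    wait "${pids[@]}"    # @66: blocks until every background add_remove worker exits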
00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2272290 2272291 2272294 2272295 2272297 2272299 2272301 2272303 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.518 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:05.777 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.036 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:06.296 19:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:06.296 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.296 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.296 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.296 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.296 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:06.296 19:36:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.555 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:06.556 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.815 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:07.074 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:07.333 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.333 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.333 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:07.334 19:36:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.334 19:36:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:07.593 19:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:07.593 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:07.852 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.112 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:08.372 19:36:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:08.372 19:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:08.372 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.372 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.372 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:08.372 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.372 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.372 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.631 19:36:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:08.631 
19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:08.631 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:08.889 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:09.147 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.407 
19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.407 19:36:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:09.407 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:09.407 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:09.407 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:09.407 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
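The churn above is target/ns_hotplug_stress.sh exercising namespace hotplug against the subsystem: a ten-iteration loop (the @16 "(( ++i ))" / "(( i < 10 ))" echoes) that attaches the null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 as namespace IDs 1-8 (@17) and then hot-removes all eight (@18). The adds and removes land in the log in shuffled order, which suggests the RPCs are dispatched concurrently. A minimal sketch of the loop as it can be reconstructed from this trace; the `shuf` ordering and the `&`/`wait` backgrounding are assumptions inferred from the interleaved output, not the script's verbatim source:

```bash
#!/usr/bin/env bash
# Hedged reconstruction of the ns_hotplug_stress.sh@16-@18 loop traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    # Attach bdev null$((n-1)) as namespace ID n, in a random order and in
    # the background (assumed), matching the out-of-order add lines above.
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
    done
    wait
    # Hot-remove the same namespace IDs in another random order.
    for n in $(shuf -e {1..8}); do
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n" &
    done
    wait
done
```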
00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:09.666 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:09.925 rmmod nvme_tcp
00:29:09.925 rmmod nvme_fabrics
00:29:09.925 rmmod nvme_keyring
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 2265965 ']'
00:29:09.925 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 2265965
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2265965 ']'
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2265965
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2265965
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2265965'
00:29:09.926 killing process with pid 2265965
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2265965
00:29:09.926 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2265965
00:29:10.186 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:29:10.186 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:10.187 19:36:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:12.092 
00:29:12.092 real 0m48.175s
00:29:12.092 user 2m59.689s
00:29:12.092 sys 0m21.011s
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:29:12.092 ************************************
00:29:12.092 END TEST nvmf_ns_hotplug_stress
00:29:12.092 ************************************
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable
00:29:12.092 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
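With the loop finished, the trap is cleared and nvmftestfini tears the target down before run_test starts the next suite. Stripped of trace prefixes, the shutdown traced above (nvmf/common.sh@514-@522 and @297-@303 plus the killprocess helper) boils down to roughly the sketch below; the `$nvmfpid` name and the explicit `ip netns delete` standing in for the opaque `_remove_spdk_ns` call are assumptions, not the harness's literal source:

```bash
# Hedged outline of the nvmftestfini sequence traced above.
sync                                   # flush dirty pages before unloading modules
modprobe -v -r nvme-tcp                # log shows: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"     # pid 2265965, the SPDK target (reactor_1)
iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop the tagged test rules
ip netns delete cvl_0_0_ns_spdk        # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1               # clear the initiator-side address
```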
00:29:12.352 ************************************
00:29:12.352 START TEST nvmf_delete_subsystem
************************************
00:29:12.352 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:29:12.352 * Looking for test storage...
00:29:12.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:12.352 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:29:12.352 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:29:12.352 19:36:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.352 --rc genhtml_branch_coverage=1 00:29:12.352 --rc genhtml_function_coverage=1 00:29:12.352 --rc genhtml_legend=1 00:29:12.352 --rc geninfo_all_blocks=1 00:29:12.352 --rc geninfo_unexecuted_blocks=1 00:29:12.352 00:29:12.352 ' 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.352 --rc genhtml_branch_coverage=1 00:29:12.352 --rc genhtml_function_coverage=1 00:29:12.352 --rc genhtml_legend=1 00:29:12.352 --rc geninfo_all_blocks=1 00:29:12.352 --rc geninfo_unexecuted_blocks=1 00:29:12.352 00:29:12.352 ' 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.352 --rc genhtml_branch_coverage=1 00:29:12.352 --rc genhtml_function_coverage=1 00:29:12.352 --rc genhtml_legend=1 00:29:12.352 --rc geninfo_all_blocks=1 00:29:12.352 --rc geninfo_unexecuted_blocks=1 00:29:12.352 00:29:12.352 ' 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:12.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:12.352 --rc genhtml_branch_coverage=1 00:29:12.352 --rc genhtml_function_coverage=1 00:29:12.352 --rc 
genhtml_legend=1 00:29:12.352 --rc geninfo_all_blocks=1 00:29:12.352 --rc geninfo_unexecuted_blocks=1 00:29:12.352 00:29:12.352 ' 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.352 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.353 19:36:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:12.353 19:36:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:18.924 19:36:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:18.924 19:36:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:29:18.924 Found 0000:86:00.0 (0x8086 - 0x159b)
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:29:18.924 Found 0000:86:00.1 (0x8086 - 0x159b)
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:29:18.924 Found net devices under 0000:86:00.0: cvl_0_0
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:29:18.924 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:29:18.925 Found net devices under 0000:86:00.1: cvl_0_1
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:18.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:29:18.925 00:29:18.925 --- 10.0.0.2 ping statistics --- 00:29:18.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.925 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:18.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:18.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:29:18.925 00:29:18.925 --- 10.0.0.1 ping statistics --- 00:29:18.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.925 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=2276555 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 2276555 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2276555 ']' 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
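The nvmf_tcp_init sequence traced above (common.sh@250-291) turns the two ports of a single e810 NIC into a self-contained target/initiator pair: one port moves into a private network namespace for the target, the other stays in the root namespace for the initiator, and a tagged iptables rule opens TCP/4420 between them. A condensed sketch of that topology, using the interface and namespace names from this run (they vary per machine):

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'            # tagged so nvmftestfini can strip it later
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction

The two single-packet pings, whose replies appear in the records around this point, are the gate: nvmf_tcp_init only returns 0 once traffic flows both ways across the loopback cabling.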
00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:18.925 19:36:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:18.925 [2024-10-17 19:36:41.993058] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:18.925 [2024-10-17 19:36:41.993961] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:29:18.925 [2024-10-17 19:36:41.993995] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.925 [2024-10-17 19:36:42.073017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:18.925 [2024-10-17 19:36:42.114177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.925 [2024-10-17 19:36:42.114218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.925 [2024-10-17 19:36:42.114225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.925 [2024-10-17 19:36:42.114233] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.925 [2024-10-17 19:36:42.114238] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:18.925 [2024-10-17 19:36:42.115416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.925 [2024-10-17 19:36:42.115418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.925 [2024-10-17 19:36:42.181779] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:18.925 [2024-10-17 19:36:42.182249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:18.925 [2024-10-17 19:36:42.182476] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
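The EAL banner, the two "Reactor started on core" notices (matching the 0x3 core mask), and the "to intr mode from intr mode" thread messages above confirm that --interrupt-mode took effect before the test proceeds. The waitforlisten helper being traced here blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock; a minimal standalone equivalent of that start-and-wait step, assuming the default RPC socket path (the real helper in autotest_common.sh carries more retry and error handling):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        # any cheap RPC serves as a liveness probe; it fails until the socket is up
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$nvmfpid" || exit 1    # give up early if the target died during startup
        sleep 0.1
    done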
00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 [2024-10-17 19:36:42.884225] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 [2024-10-17 19:36:42.908485] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 NULL1 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.185 19:36:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 Delay0 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2276696 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:19.185 19:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:19.444 [2024-10-17 19:36:43.023373] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
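Every rpc_cmd call above is a thin wrapper around scripts/rpc.py against the target's UNIX socket. Written out directly, the stack under test is a null bdev behind a delay bdev, with all four latency knobs set to 1,000,000 microseconds (about one second per I/O), exported as a namespace of cnode1 and then driven by spdk_nvme_perf from cores 2-3 (parameter values copied from this run):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                  # allow any host, max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB backing, 512 B blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000     # avg/p99 read+write latency, in us
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &       # qd 128, 70% reads, 512 B I/Os, 5 s

With every I/O held for a second by the delay bdev, the queue is guaranteed to be full of in-flight commands when the "sleep 2" expires and nvmf_delete_subsystem fires in the next record, which is exactly the race the test wants to exercise.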
00:29:21.346 19:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.346 19:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:21.346 19:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 starting I/O failed: -6 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 [2024-10-17 19:36:45.220226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c8930 is same with the state(6) to be set 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write 
completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Read completed with error (sct=0, sc=8) 00:29:21.605 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 
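The long runs of "completed with error (sct=0, sc=8)" here are the point of the test rather than a failure of it: deleting the subsystem while the delay bdev holds the queue full makes the target tear its qpairs down, and each outstanding command completes with status code type 0 (generic) and status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. The interleaved "starting I/O failed: -6" records are perf's submit path returning a negated errno (consistent with -ENXIO, errno 6 on Linux) once the qpair is gone. An illustrative decoder for those pairs, not part of the test scripts:

    # Maps the (sct, sc) values printed in these records to their NVMe meaning;
    # only the case seen in this log is spelled out, everything else falls through.
    decode_cpl() {
        local sct=$1 sc=$2
        if (( sct == 0 && sc == 0x08 )); then
            echo 'ABORTED - SQ DELETION (generic status 0x08)'
        else
            echo "sct=$sct sc=$sc: see the Status Code tables in the NVMe base spec"
        fi
    }
    decode_cpl 0 8    # -> ABORTED - SQ DELETION (generic status 0x08)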
00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 starting I/O failed: -6 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 [2024-10-17 19:36:45.224970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd3c800d470 is same with the state(6) to be set 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write 
completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Write completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:21.606 Read completed with error (sct=0, sc=8) 00:29:22.543 [2024-10-17 19:36:46.201407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c9a70 is same with the state(6) to be set 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 [2024-10-17 19:36:46.223480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c8390 is same with the state(6) to be set 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with 
error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Write completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.543 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 [2024-10-17 19:36:46.223901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c8750 is same with the state(6) to be set 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 [2024-10-17 19:36:46.227318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd3c800d7a0 is same with the state(6) to be set 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Write completed with error (sct=0, sc=8) 00:29:22.544 Read completed with error (sct=0, sc=8) 00:29:22.544 [2024-10-17 19:36:46.227837] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd3c800cfe0 is same with the state(6) to be set 00:29:22.544 Initializing NVMe Controllers 00:29:22.544 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.544 Controller IO queue size 128, less than required. 00:29:22.544 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:22.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:22.544 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:22.544 Initialization complete. Launching workers. 00:29:22.544 ======================================================== 00:29:22.544 Latency(us) 00:29:22.544 Device Information : IOPS MiB/s Average min max 00:29:22.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.79 0.08 901461.38 277.34 1005902.55 00:29:22.544 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.80 0.08 910119.79 231.08 1009740.62 00:29:22.544 ======================================================== 00:29:22.544 Total : 330.59 0.16 905751.46 231.08 1009740.62 00:29:22.544 00:29:22.544 [2024-10-17 19:36:46.228341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c9a70 (9): Bad file descriptor 00:29:22.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:22.544 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:22.544 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:22.544 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2276696 00:29:22.544 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2276696 00:29:23.112 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2276696) - No such process 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2276696 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2276696 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2276696 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:23.112 [2024-10-17 19:36:46.756481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2277375 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:23.112 19:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:23.112 [2024-10-17 19:36:46.838931] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
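Two shell idioms carry the control flow through this stretch. NOT (autotest_common.sh, the es=1 arithmetic traced just above) runs a command and succeeds only if that command fails, which is how the script asserts that waiting on the already-reaped first perf pid reports an error. The second perf run is then polled with kill -0, which delivers no signal and merely tests that the pid still exists; each pass produces one of the "sleep 0.5" records that follow. A reconstruction of that loop from the trace, with the bound taken from line 60 of delete_subsystem.sh (the script's own variable names):

    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # signal 0 = existence check only
        (( delay++ > 20 )) && exit 1             # fail if perf outlives its ~10 s budget
        sleep 0.5
    done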
00:29:23.678 19:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:23.678 19:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:23.678 19:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:24.244 19:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:24.244 19:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:24.244 19:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:24.503 19:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:24.503 19:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:24.503 19:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:25.070 19:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:25.070 19:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:25.070 19:36:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:25.637 19:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:25.637 19:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:25.637 19:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:26.204 19:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:26.204 19:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:26.204 19:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:26.204 Initializing NVMe Controllers 00:29:26.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.204 Controller IO queue size 128, less than required. 00:29:26.205 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:26.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:26.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:26.205 Initialization complete. Launching workers. 
00:29:26.205 ======================================================== 00:29:26.205 Latency(us) 00:29:26.205 Device Information : IOPS MiB/s Average min max 00:29:26.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002485.19 1000148.92 1010499.61 00:29:26.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004425.93 1000142.50 1041303.27 00:29:26.205 ======================================================== 00:29:26.205 Total : 256.00 0.12 1003455.56 1000142.50 1041303.27 00:29:26.205 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2277375 00:29:26.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2277375) - No such process 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2277375 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:26.772 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:26.772 rmmod nvme_tcp 00:29:26.772 rmmod nvme_fabrics 00:29:26.772 rmmod nvme_keyring 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 2276555 ']' 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 2276555 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2276555 ']' 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2276555 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2276555 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2276555' 00:29:26.773 killing process with pid 2276555 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2276555 00:29:26.773 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2276555 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.032 19:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.939 00:29:28.939 real 0m16.760s 00:29:28.939 user 0m26.438s 00:29:28.939 sys 0m6.081s 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:28.939 ************************************ 00:29:28.939 END TEST nvmf_delete_subsystem 00:29:28.939 ************************************ 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.939 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.199 ************************************ 00:29:29.199 START TEST nvmf_host_management 00:29:29.199 ************************************ 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:29.199 * Looking for test storage... 00:29:29.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.199 --rc genhtml_branch_coverage=1 00:29:29.199 --rc genhtml_function_coverage=1 00:29:29.199 --rc genhtml_legend=1 00:29:29.199 --rc geninfo_all_blocks=1 00:29:29.199 --rc geninfo_unexecuted_blocks=1 00:29:29.199 00:29:29.199 ' 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.199 --rc genhtml_branch_coverage=1 00:29:29.199 --rc genhtml_function_coverage=1 00:29:29.199 --rc genhtml_legend=1 00:29:29.199 --rc geninfo_all_blocks=1 00:29:29.199 --rc geninfo_unexecuted_blocks=1 00:29:29.199 00:29:29.199 ' 00:29:29.199 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:29.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.200 --rc genhtml_branch_coverage=1 00:29:29.200 --rc genhtml_function_coverage=1 00:29:29.200 --rc genhtml_legend=1 00:29:29.200 --rc geninfo_all_blocks=1 00:29:29.200 --rc geninfo_unexecuted_blocks=1 00:29:29.200 00:29:29.200 ' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:29.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.200 --rc genhtml_branch_coverage=1 00:29:29.200 --rc genhtml_function_coverage=1 00:29:29.200 --rc genhtml_legend=1 
00:29:29.200 --rc geninfo_all_blocks=1 00:29:29.200 --rc geninfo_unexecuted_blocks=1 00:29:29.200 00:29:29.200 ' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.200 19:36:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.200 19:36:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.781 19:36:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.781 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:35.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:35.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 
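At this point nvmf/common.sh has matched both E810 ports and is resolving each PCI function to its kernel net device through sysfs, keeping only interfaces whose link is up; the name trimming and the "Found net devices under ..." echoes follow in the next entries. A minimal standalone sketch of that discovery pattern (the two PCI addresses are assumptions taken from this run's "Found 0000:86:00.x" lines, and the operstate check is inferred from the [[ up == up ]] comparisons in the trace):

pci_devs=("0000:86:00.0" "0000:86:00.1")  # the two ice/E810 functions reported above
net_devs=()
for pci in "${pci_devs[@]}"; do
  # a PCI NIC's interface names are exposed under its sysfs node
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  for net_dev in "${!pci_net_devs[@]}"; do
    # drop interfaces whose link is not up
    [[ $(< "${pci_net_devs[net_dev]}/operstate") == up ]] || unset 'pci_net_devs[net_dev]'
  done
  pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface name, e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done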
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:35.782 Found net devices under 0000:86:00.0: cvl_0_0 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:35.782 Found net devices under 0000:86:00.1: cvl_0_1 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:29:35.782 00:29:35.782 --- 10.0.0.2 ping statistics --- 00:29:35.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.782 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms
00:29:35.782 
00:29:35.782 --- 10.0.0.1 ping statistics ---
00:29:35.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:35.782 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@448 -- # return 0
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:35.782 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=2281371
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 2281371
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2281371 ']'
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:35.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable
00:29:35.783 19:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:29:35.783 [2024-10-17 19:36:58.907368] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:29:35.783 [2024-10-17 19:36:58.908293] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization...
00:29:35.783 [2024-10-17 19:36:58.908325] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:35.783 [2024-10-17 19:36:58.987362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:35.783 [2024-10-17 19:36:59.031823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:35.783 [2024-10-17 19:36:59.031858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:35.783 [2024-10-17 19:36:59.031865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:35.783 [2024-10-17 19:36:59.031871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:35.783 [2024-10-17 19:36:59.031876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:35.783 [2024-10-17 19:36:59.033354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:35.783 [2024-10-17 19:36:59.033384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:35.783 [2024-10-17 19:36:59.033516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:35.783 [2024-10-17 19:36:59.033517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:29:35.783 [2024-10-17 19:36:59.099675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:29:35.783 [2024-10-17 19:36:59.100102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:29:35.783 [2024-10-17 19:36:59.100503] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:29:35.783 [2024-10-17 19:36:59.100799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:29:35.783 [2024-10-17 19:36:59.100853] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
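nvmfappstart launched the target with core mask 0x1E, and the four reactors on cores 1-4 in the notices above follow directly from that bitmask. A quick illustrative decode, not part of the test scripts:

mask=0x1E  # 0b11110: bit n set -> a reactor is pinned to core n
for core in {0..7}; do
  (( (mask >> core) & 1 )) && echo "reactor on core $core"
done
# prints cores 1, 2, 3 and 4, matching the "Reactor started on core N" notices;
# the bdevperf initiator is started further down with -c 0x1, i.e. on the remaining core 0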
00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.783 [2024-10-17 19:36:59.166315] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.783 Malloc0 00:29:35.783 [2024-10-17 19:36:59.258579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2281530 00:29:35.783 19:36:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2281530 /var/tmp/bdevperf.sock 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2281530 ']' 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:35.783 { 00:29:35.783 "params": { 00:29:35.783 "name": "Nvme$subsystem", 00:29:35.783 "trtype": "$TEST_TRANSPORT", 00:29:35.783 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.783 "adrfam": "ipv4", 00:29:35.783 "trsvcid": "$NVMF_PORT", 00:29:35.783 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.783 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.783 "hdgst": ${hdgst:-false}, 00:29:35.783 "ddgst": ${ddgst:-false} 00:29:35.783 }, 00:29:35.783 "method": "bdev_nvme_attach_controller" 00:29:35.783 } 00:29:35.783 EOF 00:29:35.783 )") 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 
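The gen_nvmf_target_json trace above renders one bdev_nvme_attach_controller entry per subsystem from a here-document and comma-joins the fragments for bdevperf's --json input; the resolved Nvme0 block is printed in the entries that follow. A cut-down sketch of that pattern (a simplification, not the helper itself: the traced version also substitutes the transport, target address, port and digest settings from the test environment):

gen_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do           # no arguments -> a single subsystem "1"
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"             # comma-join, as the IFS=, / printf entries show
}
gen_target_json 0                          # emits the Nvme0 attach block printed below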
00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:29:35.783 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:35.783 "params": { 00:29:35.783 "name": "Nvme0", 00:29:35.783 "trtype": "tcp", 00:29:35.783 "traddr": "10.0.0.2", 00:29:35.783 "adrfam": "ipv4", 00:29:35.783 "trsvcid": "4420", 00:29:35.783 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.783 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.783 "hdgst": false, 00:29:35.783 "ddgst": false 00:29:35.783 }, 00:29:35.783 "method": "bdev_nvme_attach_controller" 00:29:35.783 }' 00:29:35.783 [2024-10-17 19:36:59.353303] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:29:35.783 [2024-10-17 19:36:59.353350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281530 ] 00:29:35.783 [2024-10-17 19:36:59.432247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.783 [2024-10-17 19:36:59.473173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.046 Running I/O for 10 seconds... 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.046 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=111 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 111 -ge 100 ']' 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.330 [2024-10-17 19:36:59.854036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f60 is same with the state(6) to be set 00:29:36.330 [2024-10-17 19:36:59.854072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f60 is same with the state(6) to be set 00:29:36.330 [2024-10-17 19:36:59.854081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f60 is same with the state(6) to be set 00:29:36.330 [2024-10-17 19:36:59.854087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f60 is same with the state(6) to be set 00:29:36.330 [2024-10-17 19:36:59.854093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f60 is same with the state(6) to be set 00:29:36.330 [2024-10-17 19:36:59.854099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b1f60 is same with the state(6) to be set 00:29:36.330 [2024-10-17 19:36:59.858667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 [2024-10-17 19:36:59.858815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.330 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.330 [2024-10-17 19:36:59.858829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.330 [2024-10-17 19:36:59.858841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 
[2024-10-17 19:36:59.858889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.858990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.858997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.859005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.859012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 19:36:59.859020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331 [2024-10-17 19:36:59.859026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331 [2024-10-17 
19:36:59.859034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.331
[2024-10-17 19:36:59.859040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:36.331
[... the same WRITE + "ABORTED - SQ DELETION (00/08)" notice pair repeats for cid:23 through cid:63, lba stepping by 128 from 27520 to 32640; 41 near-identical pairs elided ...]
19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:36.331
19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:36.332
[2024-10-17 19:36:59.859702] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x85b850 was disconnected and freed. reset controller. 00:29:36.332
19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:36.332
[2024-10-17 19:36:59.860571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:36.332
task offset: 24576 on job bdev=Nvme0n1 fails 00:29:36.332
00:29:36.332 Latency(us)
00:29:36.332 [2024-10-17T17:37:00.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.332 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:36.332 Job: Nvme0n1 ended in about 0.11 seconds with error
00:29:36.332 Verification LBA range: start 0x0 length 0x400
00:29:36.332 Nvme0n1 : 0.11 1776.36 111.02 592.12 0.00 24905.07 1302.92 27462.70
00:29:36.332 [2024-10-17T17:37:00.116Z] ===================================================================================================================
00:29:36.332 [2024-10-17T17:37:00.116Z] Total : 1776.36 111.02 592.12 0.00 24905.07 1302.92 27462.70
00:29:36.332
[2024-10-17 19:36:59.862926] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:36.332
[2024-10-17 19:36:59.862950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x642600 (9): Bad file descriptor 00:29:36.332
[2024-10-17 19:36:59.865809] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
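The MiB/s column in the table above is just the IOPS column scaled by the job's I/O size (65536 bytes, per the "Job:" header line). A quick check outside the log, for clarity:

  # 1776.36 I/Os per second x 65536 bytes per I/O, expressed in MiB/s
  echo 'scale=2; 1776.36 * 65536 / 1048576' | bc
  # -> 111.02, matching the Nvme0n1 row above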
00:29:36.332 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:36.332 19:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2281530 00:29:37.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2281530) - No such process 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:29:37.327 { 00:29:37.327 "params": { 00:29:37.327 "name": "Nvme$subsystem", 00:29:37.327 "trtype": "$TEST_TRANSPORT", 00:29:37.327 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:37.327 "adrfam": "ipv4", 00:29:37.327 "trsvcid": "$NVMF_PORT", 00:29:37.327 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:37.327 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:37.327 "hdgst": ${hdgst:-false}, 00:29:37.327 "ddgst": ${ddgst:-false} 00:29:37.327 }, 00:29:37.327 "method": "bdev_nvme_attach_controller" 00:29:37.327 } 00:29:37.327 EOF 00:29:37.327 )") 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:29:37.327 19:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:29:37.327 "params": { 00:29:37.327 "name": "Nvme0", 00:29:37.327 "trtype": "tcp", 00:29:37.327 "traddr": "10.0.0.2", 00:29:37.327 "adrfam": "ipv4", 00:29:37.327 "trsvcid": "4420", 00:29:37.327 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:37.327 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:37.327 "hdgst": false, 00:29:37.327 "ddgst": false 00:29:37.327 }, 00:29:37.327 "method": "bdev_nvme_attach_controller" 00:29:37.327 }' 00:29:37.327 [2024-10-17 19:37:00.924948] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:29:37.327 [2024-10-17 19:37:00.924995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281841 ] 00:29:37.327 [2024-10-17 19:37:01.000308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.327 [2024-10-17 19:37:01.040871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.586 Running I/O for 1 seconds... 00:29:38.963 1984.00 IOPS, 124.00 MiB/s 00:29:38.964 Latency(us) 00:29:38.964 [2024-10-17T17:37:02.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:38.964 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:38.964 Verification LBA range: start 0x0 length 0x400 00:29:38.964 Nvme0n1 : 1.02 1998.38 124.90 0.00 0.00 31538.32 6522.39 27213.04 00:29:38.964 [2024-10-17T17:37:02.748Z] =================================================================================================================== 00:29:38.964 [2024-10-17T17:37:02.748Z] Total : 1998.38 124.90 0.00 0.00 31538.32 6522.39 27213.04 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.964 rmmod nvme_tcp 00:29:38.964 rmmod nvme_fabrics 00:29:38.964 rmmod nvme_keyring 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 2281371 ']' 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 2281371 00:29:38.964 19:37:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2281371 ']' 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2281371 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2281371 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2281371' 00:29:38.964 killing process with pid 2281371 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2281371 00:29:38.964 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2281371 00:29:39.223 [2024-10-17 19:37:02.775969] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.223 19:37:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:41.129 00:29:41.129 real 0m12.136s 00:29:41.129 user 
0m17.042s 00:29:41.129 sys 0m6.145s 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:41.129 ************************************ 00:29:41.129 END TEST nvmf_host_management 00:29:41.129 ************************************ 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:41.129 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:41.389 ************************************ 00:29:41.389 START TEST nvmf_lvol 00:29:41.389 ************************************ 00:29:41.389 19:37:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:41.389 * Looking for test storage... 00:29:41.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:41.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.389 --rc genhtml_branch_coverage=1 00:29:41.389 --rc genhtml_function_coverage=1 00:29:41.389 --rc genhtml_legend=1 00:29:41.389 --rc geninfo_all_blocks=1 00:29:41.389 --rc geninfo_unexecuted_blocks=1 00:29:41.389 00:29:41.389 ' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:41.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.389 --rc genhtml_branch_coverage=1 00:29:41.389 --rc genhtml_function_coverage=1 00:29:41.389 --rc genhtml_legend=1 00:29:41.389 --rc geninfo_all_blocks=1 00:29:41.389 --rc geninfo_unexecuted_blocks=1 00:29:41.389 00:29:41.389 ' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:41.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.389 --rc genhtml_branch_coverage=1 00:29:41.389 --rc genhtml_function_coverage=1 00:29:41.389 --rc genhtml_legend=1 00:29:41.389 --rc geninfo_all_blocks=1 00:29:41.389 --rc geninfo_unexecuted_blocks=1 00:29:41.389 00:29:41.389 ' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:41.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.389 --rc genhtml_branch_coverage=1 00:29:41.389 --rc genhtml_function_coverage=1 
00:29:41.389 --rc genhtml_legend=1 00:29:41.389 --rc geninfo_all_blocks=1 00:29:41.389 --rc geninfo_unexecuted_blocks=1 00:29:41.389 00:29:41.389 ' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.389 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.390 19:37:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.390 19:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:47.961 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.962 19:37:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:47.962 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:47.962 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:47.962 Found net devices under 0000:86:00.0: cvl_0_0 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:47.962 Found net devices under 0000:86:00.1: cvl_0_1 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.962 
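The "Found net devices under 0000:86:00.x" lines above come from globbing /sys/bus/pci/devices/$pci/net/, as the traced pci_net_devs assignment shows. The same PCI-to-netdev mapping can be checked by hand (a quick aside, outside the log) before the namespace plumbing continues below:

  # list the kernel netdev(s) registered for one of this host's e810 PCI functions
  ls /sys/bus/pci/devices/0000:86:00.0/net/
  # -> cvl_0_0, the port that gets moved into the target namespace next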
19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.962 19:37:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:29:47.962 00:29:47.962 --- 10.0.0.2 ping statistics --- 00:29:47.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.962 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:29:47.962 00:29:47.962 --- 10.0.0.1 ping statistics --- 00:29:47.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.962 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=2285575 00:29:47.962 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 2285575 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2285575 ']' 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:47.963 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:47.963 [2024-10-17 19:37:11.111104] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
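Condensed from the nvmf_tcp_init trace above: cvl_0_0 carries the target address 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 keeps the initiator address 10.0.0.1 in the root namespace, and both directions are verified with a ping. A minimal sketch of the same setup, assuming the interface names from this host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                  # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator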
00:29:47.963 [2024-10-17 19:37:11.112053] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:29:47.963 [2024-10-17 19:37:11.112087] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.963 [2024-10-17 19:37:11.194646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.963 [2024-10-17 19:37:11.235607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.963 [2024-10-17 19:37:11.235642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.963 [2024-10-17 19:37:11.235650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.963 [2024-10-17 19:37:11.235655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.963 [2024-10-17 19:37:11.235660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.963 [2024-10-17 19:37:11.237021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.963 [2024-10-17 19:37:11.237136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.963 [2024-10-17 19:37:11.237137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.963 [2024-10-17 19:37:11.303251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.963 [2024-10-17 19:37:11.303932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:47.963 [2024-10-17 19:37:11.304210] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:47.963 [2024-10-17 19:37:11.304303] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
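Interrupt-mode startup is complete at this point, and the trace that follows provisions the lvol stack one RPC at a time. Pulled together as a standalone sketch (rpc.py here stands for the checkout's scripts/rpc.py; the lvstore and lvol UUIDs seen later in the log are generated at runtime):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                    # Malloc0 (per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
  rpc.py bdev_malloc_create 64 512                    # Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # capture the lvstore UUID
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # size 20, per LVOL_BDEV_INIT_SIZE
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420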
00:29:48.222 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.222 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:29:48.222 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:29:48.222 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.222 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:48.223 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.223 19:37:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.481 [2024-10-17 19:37:12.145927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.481 19:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.740 19:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:48.740 19:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:48.999 19:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:48.999 19:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:49.258 19:37:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:49.258 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=572c22c6-8194-4bda-a1e9-b40c3d13bd03 00:29:49.258 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 572c22c6-8194-4bda-a1e9-b40c3d13bd03 lvol 20 00:29:49.516 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=36a9c88b-5228-4dee-b9b2-3bcaf14b45d3 00:29:49.516 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:49.774 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36a9c88b-5228-4dee-b9b2-3bcaf14b45d3 00:29:49.774 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.032 [2024-10-17 19:37:13.729820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:29:50.032 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.290 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:50.290 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2286061 00:29:50.290 19:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:51.225 19:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 36a9c88b-5228-4dee-b9b2-3bcaf14b45d3 MY_SNAPSHOT 00:29:51.484 19:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5c344fd0-974a-40e5-87b1-9a50e925dc23 00:29:51.484 19:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 36a9c88b-5228-4dee-b9b2-3bcaf14b45d3 30 00:29:51.742 19:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5c344fd0-974a-40e5-87b1-9a50e925dc23 MY_CLONE 00:29:52.000 19:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4f07c407-c57f-4ba0-8d87-9292e37b8749 00:29:52.000 19:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4f07c407-c57f-4ba0-8d87-9292e37b8749 00:29:52.567 19:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2286061 00:30:00.686 Initializing NVMe Controllers 00:30:00.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:00.686 Controller IO queue size 128, less than required. 00:30:00.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:00.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:00.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:00.686 Initialization complete. Launching workers. 
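Note: the RPC flow the trace above just completed can be reproduced by hand. A minimal sketch, with $rpc standing in for the full scripts/rpc.py path shown in the log, and with UUIDs differing per run:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512        # -> Malloc0
  $rpc bdev_malloc_create 64 512        # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # with spdk_nvme_perf running against the subsystem in the background:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"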
00:30:00.686 ======================================================== 00:30:00.686 Latency(us) 00:30:00.686 Device Information : IOPS MiB/s Average min max 00:30:00.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12484.90 48.77 10254.71 1519.71 60589.60 00:30:00.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12606.80 49.25 10154.51 1541.94 49968.73 00:30:00.686 ======================================================== 00:30:00.686 Total : 25091.70 98.01 10204.36 1519.71 60589.60 00:30:00.686 00:30:00.686 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:00.943 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36a9c88b-5228-4dee-b9b2-3bcaf14b45d3 00:30:00.943 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 572c22c6-8194-4bda-a1e9-b40c3d13bd03 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:01.201 rmmod nvme_tcp 00:30:01.201 rmmod nvme_fabrics 00:30:01.201 rmmod nvme_keyring 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 2285575 ']' 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 2285575 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2285575 ']' 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2285575 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:01.201 19:37:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2285575 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2285575' 00:30:01.460 killing process with pid 2285575 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2285575 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2285575 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.460 19:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:03.999 00:30:03.999 real 0m22.329s 00:30:03.999 user 0m55.369s 00:30:03.999 sys 0m9.928s 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:03.999 ************************************ 00:30:03.999 END TEST nvmf_lvol 00:30:03.999 ************************************ 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:03.999 ************************************ 00:30:03.999 START TEST nvmf_lvs_grow 00:30:03.999 
************************************ 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:03.999 * Looking for test storage... 00:30:03.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.999 --rc genhtml_branch_coverage=1 00:30:03.999 --rc genhtml_function_coverage=1 00:30:03.999 --rc genhtml_legend=1 00:30:03.999 --rc geninfo_all_blocks=1 00:30:03.999 --rc geninfo_unexecuted_blocks=1 00:30:03.999 00:30:03.999 ' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.999 --rc genhtml_branch_coverage=1 00:30:03.999 --rc genhtml_function_coverage=1 00:30:03.999 --rc genhtml_legend=1 00:30:03.999 --rc geninfo_all_blocks=1 00:30:03.999 --rc geninfo_unexecuted_blocks=1 00:30:03.999 00:30:03.999 ' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.999 --rc genhtml_branch_coverage=1 00:30:03.999 --rc genhtml_function_coverage=1 00:30:03.999 --rc genhtml_legend=1 00:30:03.999 --rc geninfo_all_blocks=1 00:30:03.999 --rc geninfo_unexecuted_blocks=1 00:30:03.999 00:30:03.999 ' 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:03.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:03.999 --rc genhtml_branch_coverage=1 00:30:03.999 --rc genhtml_function_coverage=1 00:30:03.999 --rc genhtml_legend=1 00:30:03.999 --rc geninfo_all_blocks=1 00:30:03.999 --rc geninfo_unexecuted_blocks=1 00:30:03.999 00:30:03.999 ' 00:30:03.999 19:37:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:03.999 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:04.000 19:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:10.570 19:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:10.570 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:10.570 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:10.570 Found net devices under 0000:86:00.0: cvl_0_0 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for 
pci in "${pci_devs[@]}" 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:10.570 Found net devices under 0000:86:00.1: cvl_0_1 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:10.570 19:37:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:10.570 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:10.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:10.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:30:10.571 00:30:10.571 --- 10.0.0.2 ping statistics --- 00:30:10.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.571 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:10.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:10.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:30:10.571 00:30:10.571 --- 10.0.0.1 ping statistics --- 00:30:10.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:10.571 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=2291272 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 2291272 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2291272 ']' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:10.571 [2024-10-17 19:37:33.411789] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
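Note: the target-side topology the trace just brought up, condensed: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and reachability is verified in both directions before nvmf_tgt starts. A minimal sketch of the same steps, with the SPDK_NVMF comment tag on the iptables rule omitted:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1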
00:30:10.571 [2024-10-17 19:37:33.412686] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:10.571 [2024-10-17 19:37:33.412718] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:10.571 [2024-10-17 19:37:33.491450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.571 [2024-10-17 19:37:33.532328] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:10.571 [2024-10-17 19:37:33.532362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:10.571 [2024-10-17 19:37:33.532369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:10.571 [2024-10-17 19:37:33.532375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:10.571 [2024-10-17 19:37:33.532380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:10.571 [2024-10-17 19:37:33.532926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.571 [2024-10-17 19:37:33.597950] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:10.571 [2024-10-17 19:37:33.598154] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:10.571 [2024-10-17 19:37:33.829564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:10.571 ************************************ 00:30:10.571 START TEST lvs_grow_clean 00:30:10.571 ************************************ 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # 
lvs_grow 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:10.571 19:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:10.571 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:10.571 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:10.571 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:10.571 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:10.571 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:10.831 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:10.831 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:10.831 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 lvol 150 00:30:11.091 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b71cac2-ff17-46a3-8147-b246d8d82f15 00:30:11.091 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:11.091 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:11.091 [2024-10-17 19:37:34.869305] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:11.091 [2024-10-17 19:37:34.869438] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:11.091 true 00:30:11.350 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:11.350 19:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:11.350 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:11.350 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:11.609 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b71cac2-ff17-46a3-8147-b246d8d82f15 00:30:11.868 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:11.868 [2024-10-17 19:37:35.609813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:11.868 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2291757 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2291757 /var/tmp/bdevperf.sock 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2291757 ']' 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:12.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:12.127 19:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:12.127 [2024-10-17 19:37:35.823232] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:12.127 [2024-10-17 19:37:35.823278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2291757 ] 00:30:12.127 [2024-10-17 19:37:35.899057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.386 [2024-10-17 19:37:35.941317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.386 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:12.386 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:30:12.387 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:12.645 Nvme0n1 00:30:12.645 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:12.903 [ 00:30:12.903 { 00:30:12.903 "name": "Nvme0n1", 00:30:12.903 "aliases": [ 00:30:12.903 "3b71cac2-ff17-46a3-8147-b246d8d82f15" 00:30:12.903 ], 00:30:12.903 "product_name": "NVMe disk", 00:30:12.903 "block_size": 4096, 00:30:12.903 "num_blocks": 38912, 00:30:12.903 "uuid": "3b71cac2-ff17-46a3-8147-b246d8d82f15", 00:30:12.903 "numa_id": 1, 00:30:12.903 "assigned_rate_limits": { 00:30:12.903 "rw_ios_per_sec": 0, 00:30:12.903 "rw_mbytes_per_sec": 0, 00:30:12.903 "r_mbytes_per_sec": 0, 00:30:12.903 "w_mbytes_per_sec": 0 00:30:12.903 }, 00:30:12.903 "claimed": false, 00:30:12.903 "zoned": false, 00:30:12.903 "supported_io_types": { 00:30:12.903 "read": true, 00:30:12.903 "write": true, 00:30:12.903 "unmap": true, 00:30:12.903 "flush": true, 00:30:12.903 "reset": true, 00:30:12.903 "nvme_admin": true, 00:30:12.903 "nvme_io": true, 00:30:12.903 "nvme_io_md": false, 00:30:12.903 "write_zeroes": true, 00:30:12.903 "zcopy": false, 00:30:12.903 "get_zone_info": false, 00:30:12.903 "zone_management": false, 00:30:12.903 "zone_append": false, 00:30:12.903 "compare": true, 00:30:12.903 "compare_and_write": true, 00:30:12.903 "abort": true, 00:30:12.903 "seek_hole": false, 00:30:12.903 "seek_data": false, 00:30:12.903 "copy": true, 
00:30:12.903 "nvme_iov_md": false 00:30:12.903 }, 00:30:12.903 "memory_domains": [ 00:30:12.903 { 00:30:12.903 "dma_device_id": "system", 00:30:12.903 "dma_device_type": 1 00:30:12.903 } 00:30:12.903 ], 00:30:12.903 "driver_specific": { 00:30:12.903 "nvme": [ 00:30:12.903 { 00:30:12.903 "trid": { 00:30:12.903 "trtype": "TCP", 00:30:12.903 "adrfam": "IPv4", 00:30:12.903 "traddr": "10.0.0.2", 00:30:12.903 "trsvcid": "4420", 00:30:12.903 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:12.903 }, 00:30:12.903 "ctrlr_data": { 00:30:12.903 "cntlid": 1, 00:30:12.903 "vendor_id": "0x8086", 00:30:12.903 "model_number": "SPDK bdev Controller", 00:30:12.903 "serial_number": "SPDK0", 00:30:12.904 "firmware_revision": "25.01", 00:30:12.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.904 "oacs": { 00:30:12.904 "security": 0, 00:30:12.904 "format": 0, 00:30:12.904 "firmware": 0, 00:30:12.904 "ns_manage": 0 00:30:12.904 }, 00:30:12.904 "multi_ctrlr": true, 00:30:12.904 "ana_reporting": false 00:30:12.904 }, 00:30:12.904 "vs": { 00:30:12.904 "nvme_version": "1.3" 00:30:12.904 }, 00:30:12.904 "ns_data": { 00:30:12.904 "id": 1, 00:30:12.904 "can_share": true 00:30:12.904 } 00:30:12.904 } 00:30:12.904 ], 00:30:12.904 "mp_policy": "active_passive" 00:30:12.904 } 00:30:12.904 } 00:30:12.904 ] 00:30:12.904 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2291778 00:30:12.904 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:12.904 19:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:12.904 Running I/O for 10 seconds... 
00:30:13.839 Latency(us) 00:30:13.839 [2024-10-17T17:37:37.623Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.839 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.839 Nvme0n1 : 1.00 22628.00 88.39 0.00 0.00 0.00 0.00 0.00 00:30:13.839 [2024-10-17T17:37:37.623Z] =================================================================================================================== 00:30:13.839 [2024-10-17T17:37:37.623Z] Total : 22628.00 88.39 0.00 0.00 0.00 0.00 0.00 00:30:13.839 00:30:14.774 19:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:15.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.033 Nvme0n1 : 2.00 23013.00 89.89 0.00 0.00 0.00 0.00 0.00 00:30:15.033 [2024-10-17T17:37:38.817Z] =================================================================================================================== 00:30:15.033 [2024-10-17T17:37:38.817Z] Total : 23013.00 89.89 0.00 0.00 0.00 0.00 0.00 00:30:15.033 00:30:15.033 true 00:30:15.033 19:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:15.033 19:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:15.292 19:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:15.292 19:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:15.292 19:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2291778 00:30:15.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:15.860 Nvme0n1 : 3.00 23092.00 90.20 0.00 0.00 0.00 0.00 0.00 00:30:15.860 [2024-10-17T17:37:39.644Z] =================================================================================================================== 00:30:15.860 [2024-10-17T17:37:39.644Z] Total : 23092.00 90.20 0.00 0.00 0.00 0.00 0.00 00:30:15.860 00:30:17.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.237 Nvme0n1 : 4.00 23146.25 90.42 0.00 0.00 0.00 0.00 0.00 00:30:17.237 [2024-10-17T17:37:41.021Z] =================================================================================================================== 00:30:17.237 [2024-10-17T17:37:41.021Z] Total : 23146.25 90.42 0.00 0.00 0.00 0.00 0.00 00:30:17.237 00:30:18.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.174 Nvme0n1 : 5.00 23227.40 90.73 0.00 0.00 0.00 0.00 0.00 00:30:18.174 [2024-10-17T17:37:41.958Z] =================================================================================================================== 00:30:18.174 [2024-10-17T17:37:41.958Z] Total : 23227.40 90.73 0.00 0.00 0.00 0.00 0.00 00:30:18.174 00:30:19.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.112 Nvme0n1 : 6.00 23305.00 91.04 0.00 0.00 0.00 0.00 0.00 00:30:19.112 [2024-10-17T17:37:42.896Z] 
=================================================================================================================== 00:30:19.112 [2024-10-17T17:37:42.896Z] Total : 23305.00 91.04 0.00 0.00 0.00 0.00 0.00 00:30:19.112 00:30:20.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.049 Nvme0n1 : 7.00 23344.86 91.19 0.00 0.00 0.00 0.00 0.00 00:30:20.049 [2024-10-17T17:37:43.833Z] =================================================================================================================== 00:30:20.049 [2024-10-17T17:37:43.833Z] Total : 23344.86 91.19 0.00 0.00 0.00 0.00 0.00 00:30:20.049 00:30:20.984 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.984 Nvme0n1 : 8.00 23389.62 91.37 0.00 0.00 0.00 0.00 0.00 00:30:20.984 [2024-10-17T17:37:44.768Z] =================================================================================================================== 00:30:20.984 [2024-10-17T17:37:44.768Z] Total : 23389.62 91.37 0.00 0.00 0.00 0.00 0.00 00:30:20.984 00:30:21.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:21.921 Nvme0n1 : 9.00 23416.00 91.47 0.00 0.00 0.00 0.00 0.00 00:30:21.921 [2024-10-17T17:37:45.705Z] =================================================================================================================== 00:30:21.921 [2024-10-17T17:37:45.705Z] Total : 23416.00 91.47 0.00 0.00 0.00 0.00 0.00 00:30:21.921 00:30:22.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.858 Nvme0n1 : 10.00 23437.60 91.55 0.00 0.00 0.00 0.00 0.00 00:30:22.858 [2024-10-17T17:37:46.642Z] =================================================================================================================== 00:30:22.858 [2024-10-17T17:37:46.642Z] Total : 23437.60 91.55 0.00 0.00 0.00 0.00 0.00 00:30:22.858 00:30:22.858 00:30:22.858 Latency(us) 00:30:22.858 [2024-10-17T17:37:46.642Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:22.858 Nvme0n1 : 10.00 23434.46 91.54 0.00 0.00 5458.82 3229.99 27462.70 00:30:22.858 [2024-10-17T17:37:46.642Z] =================================================================================================================== 00:30:22.858 [2024-10-17T17:37:46.642Z] Total : 23434.46 91.54 0.00 0.00 5458.82 3229.99 27462.70 00:30:22.858 { 00:30:22.858 "results": [ 00:30:22.858 { 00:30:22.858 "job": "Nvme0n1", 00:30:22.858 "core_mask": "0x2", 00:30:22.858 "workload": "randwrite", 00:30:22.858 "status": "finished", 00:30:22.858 "queue_depth": 128, 00:30:22.858 "io_size": 4096, 00:30:22.858 "runtime": 10.004069, 00:30:22.858 "iops": 23434.46451638828, 00:30:22.858 "mibps": 91.54087701714172, 00:30:22.858 "io_failed": 0, 00:30:22.858 "io_timeout": 0, 00:30:22.858 "avg_latency_us": 5458.8169827674465, 00:30:22.858 "min_latency_us": 3229.9885714285715, 00:30:22.858 "max_latency_us": 27462.704761904763 00:30:22.858 } 00:30:22.858 ], 00:30:22.858 "core_count": 1 00:30:22.858 } 00:30:22.858 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2291757 00:30:22.858 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2291757 ']' 00:30:22.858 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2291757 
00:30:22.858 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2291757 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2291757' 00:30:23.117 killing process with pid 2291757 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2291757 00:30:23.117 Received shutdown signal, test time was about 10.000000 seconds 00:30:23.117 00:30:23.117 Latency(us) 00:30:23.117 [2024-10-17T17:37:46.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:23.117 [2024-10-17T17:37:46.901Z] =================================================================================================================== 00:30:23.117 [2024-10-17T17:37:46.901Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2291757 00:30:23.117 19:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:23.376 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.634 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:23.634 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.892 [2024-10-17 19:37:47.601375] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 
00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:23.892 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:24.150 request: 00:30:24.150 { 00:30:24.150 "uuid": "ba5d23f1-8f55-49f0-a98b-d39c73ad5a59", 00:30:24.150 "method": "bdev_lvol_get_lvstores", 00:30:24.150 "req_id": 1 00:30:24.150 } 00:30:24.150 Got JSON-RPC error response 00:30:24.150 response: 00:30:24.150 { 00:30:24.150 "code": -19, 00:30:24.150 "message": "No such device" 00:30:24.150 } 00:30:24.150 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:30:24.150 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:24.150 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:24.150 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:24.150 19:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:24.409 aio_bdev 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3b71cac2-ff17-46a3-8147-b246d8d82f15 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=3b71cac2-ff17-46a3-8147-b246d8d82f15 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:24.409 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:24.668 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b71cac2-ff17-46a3-8147-b246d8d82f15 -t 2000 00:30:24.669 [ 00:30:24.669 { 00:30:24.669 "name": "3b71cac2-ff17-46a3-8147-b246d8d82f15", 00:30:24.669 "aliases": [ 00:30:24.669 "lvs/lvol" 00:30:24.669 ], 00:30:24.669 "product_name": "Logical Volume", 00:30:24.669 "block_size": 4096, 00:30:24.669 "num_blocks": 38912, 00:30:24.669 "uuid": "3b71cac2-ff17-46a3-8147-b246d8d82f15", 00:30:24.669 "assigned_rate_limits": { 00:30:24.669 "rw_ios_per_sec": 0, 00:30:24.669 "rw_mbytes_per_sec": 0, 00:30:24.669 "r_mbytes_per_sec": 0, 00:30:24.669 "w_mbytes_per_sec": 0 00:30:24.669 }, 00:30:24.669 "claimed": false, 00:30:24.669 "zoned": false, 00:30:24.669 "supported_io_types": { 00:30:24.669 "read": true, 00:30:24.669 "write": true, 00:30:24.669 "unmap": true, 00:30:24.669 "flush": false, 00:30:24.669 "reset": true, 00:30:24.669 "nvme_admin": false, 00:30:24.669 "nvme_io": false, 00:30:24.669 "nvme_io_md": false, 00:30:24.669 "write_zeroes": true, 00:30:24.669 "zcopy": false, 00:30:24.669 "get_zone_info": false, 00:30:24.669 "zone_management": false, 00:30:24.669 "zone_append": false, 00:30:24.669 "compare": false, 00:30:24.669 "compare_and_write": false, 00:30:24.669 "abort": false, 00:30:24.669 "seek_hole": true, 00:30:24.669 "seek_data": true, 00:30:24.669 "copy": false, 00:30:24.669 "nvme_iov_md": false 00:30:24.669 }, 00:30:24.669 "driver_specific": { 00:30:24.669 "lvol": { 00:30:24.669 "lvol_store_uuid": "ba5d23f1-8f55-49f0-a98b-d39c73ad5a59", 00:30:24.669 "base_bdev": "aio_bdev", 00:30:24.669 "thin_provision": false, 00:30:24.669 "num_allocated_clusters": 38, 00:30:24.669 "snapshot": false, 00:30:24.669 "clone": false, 00:30:24.669 "esnap_clone": false 00:30:24.669 } 00:30:24.669 } 00:30:24.669 } 00:30:24.669 ] 00:30:24.669 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:30:24.669 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:24.669 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:24.928 19:37:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:24.928 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:24.928 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:25.187 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:25.187 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b71cac2-ff17-46a3-8147-b246d8d82f15 00:30:25.447 19:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba5d23f1-8f55-49f0-a98b-d39c73ad5a59 00:30:25.447 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.706 00:30:25.706 real 0m15.497s 00:30:25.706 user 0m15.064s 00:30:25.706 sys 0m1.437s 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:25.706 ************************************ 00:30:25.706 END TEST lvs_grow_clean 00:30:25.706 ************************************ 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:25.706 ************************************ 00:30:25.706 START TEST lvs_grow_dirty 00:30:25.706 ************************************ 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:25.706 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:25.965 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:25.965 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:26.224 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:26.224 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:26.224 19:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:26.483 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:26.483 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:26.483 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c lvol 150 00:30:26.483 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=84bc0710-a714-4497-93b4-579c042f281d 00:30:26.483 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:26.483 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:26.742 [2024-10-17 19:37:50.409300] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:26.742 [2024-10-17 19:37:50.409428] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:26.742 true 00:30:26.742 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:26.742 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:27.002 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:27.002 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:27.261 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84bc0710-a714-4497-93b4-579c042f281d 00:30:27.261 19:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:27.521 [2024-10-17 19:37:51.145767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.521 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2294155 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2294155 /var/tmp/bdevperf.sock 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2294155 ']' 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:27.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
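[Annotation] lvs_grow_dirty has now rebuilt the same AIO-backed grow fixture the clean variant used, this time around lvstore 4df15b17-d67c-4efb-875f-5cb8bc490d8c. Condensed from the trace above (rpc.py and the aio_bdev file path are shortened; the UUIDs are the values the RPCs returned at runtime):

    truncate -s 200M test/nvmf/target/aio_bdev
    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
    # -> 4df15b17-d67c-4efb-875f-5cb8bc490d8c, total_data_clusters == 49
    rpc.py bdev_lvol_create -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c lvol 150
    # -> 84bc0710-a714-4497-93b4-579c042f281d
    truncate -s 400M test/nvmf/target/aio_bdev   # double the backing file
    rpc.py bdev_aio_rescan aio_bdev              # block count 51200 -> 102400; clusters still 49
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 84bc0710-a714-4497-93b4-579c042f281d
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Two seconds into the bdevperf run below, bdev_lvol_grow_lvstore -u 4df15b17-... is issued and the total_data_clusters check (jq -r '.[0].total_data_clusters') asserts that 49 has become 99 while I/O is in flight.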
00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:27.780 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:27.780 [2024-10-17 19:37:51.386126] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:27.780 [2024-10-17 19:37:51.386176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2294155 ] 00:30:27.780 [2024-10-17 19:37:51.460408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.780 [2024-10-17 19:37:51.501777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.040 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:28.040 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:30:28.040 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:28.371 Nvme0n1 00:30:28.371 19:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:28.657 [ 00:30:28.657 { 00:30:28.657 "name": "Nvme0n1", 00:30:28.657 "aliases": [ 00:30:28.657 "84bc0710-a714-4497-93b4-579c042f281d" 00:30:28.657 ], 00:30:28.657 "product_name": "NVMe disk", 00:30:28.657 "block_size": 4096, 00:30:28.657 "num_blocks": 38912, 00:30:28.657 "uuid": "84bc0710-a714-4497-93b4-579c042f281d", 00:30:28.657 "numa_id": 1, 00:30:28.657 "assigned_rate_limits": { 00:30:28.657 "rw_ios_per_sec": 0, 00:30:28.657 "rw_mbytes_per_sec": 0, 00:30:28.657 "r_mbytes_per_sec": 0, 00:30:28.657 "w_mbytes_per_sec": 0 00:30:28.657 }, 00:30:28.657 "claimed": false, 00:30:28.657 "zoned": false, 00:30:28.657 "supported_io_types": { 00:30:28.657 "read": true, 00:30:28.657 "write": true, 00:30:28.657 "unmap": true, 00:30:28.657 "flush": true, 00:30:28.657 "reset": true, 00:30:28.657 "nvme_admin": true, 00:30:28.657 "nvme_io": true, 00:30:28.657 "nvme_io_md": false, 00:30:28.657 "write_zeroes": true, 00:30:28.657 "zcopy": false, 00:30:28.657 "get_zone_info": false, 00:30:28.657 "zone_management": false, 00:30:28.657 "zone_append": false, 00:30:28.658 "compare": true, 00:30:28.658 "compare_and_write": true, 00:30:28.658 "abort": true, 00:30:28.658 "seek_hole": false, 00:30:28.658 "seek_data": false, 00:30:28.658 "copy": true, 00:30:28.658 "nvme_iov_md": false 00:30:28.658 }, 00:30:28.658 "memory_domains": [ 00:30:28.658 { 00:30:28.658 "dma_device_id": "system", 00:30:28.658 "dma_device_type": 1 00:30:28.658 } 00:30:28.658 ], 00:30:28.658 "driver_specific": { 00:30:28.658 "nvme": [ 00:30:28.658 { 00:30:28.658 "trid": { 00:30:28.658 "trtype": "TCP", 00:30:28.658 "adrfam": "IPv4", 00:30:28.658 "traddr": "10.0.0.2", 00:30:28.658 "trsvcid": "4420", 00:30:28.658 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:28.658 }, 00:30:28.658 "ctrlr_data": 
{ 00:30:28.658 "cntlid": 1, 00:30:28.658 "vendor_id": "0x8086", 00:30:28.658 "model_number": "SPDK bdev Controller", 00:30:28.658 "serial_number": "SPDK0", 00:30:28.658 "firmware_revision": "25.01", 00:30:28.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.658 "oacs": { 00:30:28.658 "security": 0, 00:30:28.658 "format": 0, 00:30:28.658 "firmware": 0, 00:30:28.658 "ns_manage": 0 00:30:28.658 }, 00:30:28.658 "multi_ctrlr": true, 00:30:28.658 "ana_reporting": false 00:30:28.658 }, 00:30:28.658 "vs": { 00:30:28.658 "nvme_version": "1.3" 00:30:28.658 }, 00:30:28.658 "ns_data": { 00:30:28.658 "id": 1, 00:30:28.658 "can_share": true 00:30:28.658 } 00:30:28.658 } 00:30:28.658 ], 00:30:28.658 "mp_policy": "active_passive" 00:30:28.658 } 00:30:28.658 } 00:30:28.658 ] 00:30:28.658 19:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2294361 00:30:28.658 19:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:28.658 19:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:28.658 Running I/O for 10 seconds... 00:30:29.596 Latency(us) 00:30:29.596 [2024-10-17T17:37:53.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.596 Nvme0n1 : 1.00 22590.00 88.24 0.00 0.00 0.00 0.00 0.00 00:30:29.596 [2024-10-17T17:37:53.380Z] =================================================================================================================== 00:30:29.596 [2024-10-17T17:37:53.380Z] Total : 22590.00 88.24 0.00 0.00 0.00 0.00 0.00 00:30:29.596 00:30:30.535 19:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:30.535 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.535 Nvme0n1 : 2.00 22978.00 89.76 0.00 0.00 0.00 0.00 0.00 00:30:30.535 [2024-10-17T17:37:54.319Z] =================================================================================================================== 00:30:30.535 [2024-10-17T17:37:54.319Z] Total : 22978.00 89.76 0.00 0.00 0.00 0.00 0.00 00:30:30.535 00:30:30.795 true 00:30:30.795 19:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:30.795 19:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:30.795 19:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:30.795 19:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:30.795 19:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2294361 00:30:31.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.734 Nvme0n1 : 
3.00 23064.33 90.10 0.00 0.00 0.00 0.00 0.00 00:30:31.734 [2024-10-17T17:37:55.518Z] =================================================================================================================== 00:30:31.734 [2024-10-17T17:37:55.518Z] Total : 23064.33 90.10 0.00 0.00 0.00 0.00 0.00 00:30:31.734 00:30:32.673 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.673 Nvme0n1 : 4.00 23182.00 90.55 0.00 0.00 0.00 0.00 0.00 00:30:32.673 [2024-10-17T17:37:56.457Z] =================================================================================================================== 00:30:32.673 [2024-10-17T17:37:56.457Z] Total : 23182.00 90.55 0.00 0.00 0.00 0.00 0.00 00:30:32.673 00:30:33.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.612 Nvme0n1 : 5.00 23264.40 90.88 0.00 0.00 0.00 0.00 0.00 00:30:33.612 [2024-10-17T17:37:57.396Z] =================================================================================================================== 00:30:33.612 [2024-10-17T17:37:57.396Z] Total : 23264.40 90.88 0.00 0.00 0.00 0.00 0.00 00:30:33.612 00:30:34.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.549 Nvme0n1 : 6.00 23316.17 91.08 0.00 0.00 0.00 0.00 0.00 00:30:34.549 [2024-10-17T17:37:58.333Z] =================================================================================================================== 00:30:34.549 [2024-10-17T17:37:58.333Z] Total : 23316.17 91.08 0.00 0.00 0.00 0.00 0.00 00:30:34.549 00:30:35.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.927 Nvme0n1 : 7.00 23339.43 91.17 0.00 0.00 0.00 0.00 0.00 00:30:35.927 [2024-10-17T17:37:59.711Z] =================================================================================================================== 00:30:35.927 [2024-10-17T17:37:59.711Z] Total : 23339.43 91.17 0.00 0.00 0.00 0.00 0.00 00:30:35.927 00:30:36.495 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.495 Nvme0n1 : 8.00 23341.38 91.18 0.00 0.00 0.00 0.00 0.00 00:30:36.495 [2024-10-17T17:38:00.279Z] =================================================================================================================== 00:30:36.495 [2024-10-17T17:38:00.279Z] Total : 23341.38 91.18 0.00 0.00 0.00 0.00 0.00 00:30:36.495 00:30:37.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:37.877 Nvme0n1 : 9.00 23356.00 91.23 0.00 0.00 0.00 0.00 0.00 00:30:37.877 [2024-10-17T17:38:01.661Z] =================================================================================================================== 00:30:37.877 [2024-10-17T17:38:01.661Z] Total : 23356.00 91.23 0.00 0.00 0.00 0.00 0.00 00:30:37.877 00:30:38.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.845 Nvme0n1 : 10.00 23367.80 91.28 0.00 0.00 0.00 0.00 0.00 00:30:38.845 [2024-10-17T17:38:02.629Z] =================================================================================================================== 00:30:38.845 [2024-10-17T17:38:02.629Z] Total : 23367.80 91.28 0.00 0.00 0.00 0.00 0.00 00:30:38.845 00:30:38.845 00:30:38.845 Latency(us) 00:30:38.845 [2024-10-17T17:38:02.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:38.845 Nvme0n1 : 10.00 23369.31 91.29 0.00 0.00 5474.36 3214.38 27337.87 00:30:38.845 
[2024-10-17T17:38:02.629Z] =================================================================================================================== 00:30:38.845 [2024-10-17T17:38:02.629Z] Total : 23369.31 91.29 0.00 0.00 5474.36 3214.38 27337.87 00:30:38.845 { 00:30:38.845 "results": [ 00:30:38.845 { 00:30:38.845 "job": "Nvme0n1", 00:30:38.845 "core_mask": "0x2", 00:30:38.845 "workload": "randwrite", 00:30:38.845 "status": "finished", 00:30:38.845 "queue_depth": 128, 00:30:38.845 "io_size": 4096, 00:30:38.845 "runtime": 10.004832, 00:30:38.845 "iops": 23369.307950398368, 00:30:38.845 "mibps": 91.28635918124363, 00:30:38.845 "io_failed": 0, 00:30:38.845 "io_timeout": 0, 00:30:38.845 "avg_latency_us": 5474.362246665225, 00:30:38.845 "min_latency_us": 3214.384761904762, 00:30:38.845 "max_latency_us": 27337.874285714286 00:30:38.845 } 00:30:38.845 ], 00:30:38.845 "core_count": 1 00:30:38.845 } 00:30:38.845 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2294155 00:30:38.845 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2294155 ']' 00:30:38.845 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2294155 00:30:38.845 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:30:38.845 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:38.845 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2294155 00:30:38.846 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:38.846 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:38.846 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2294155' 00:30:38.846 killing process with pid 2294155 00:30:38.846 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2294155 00:30:38.846 Received shutdown signal, test time was about 10.000000 seconds 00:30:38.846 00:30:38.846 Latency(us) 00:30:38.846 [2024-10-17T17:38:02.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:38.846 [2024-10-17T17:38:02.630Z] =================================================================================================================== 00:30:38.846 [2024-10-17T17:38:02.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:38.846 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2294155 00:30:38.846 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:39.105 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:30:39.363 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:39.363 19:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:39.364 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:39.364 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:39.364 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2291272 00:30:39.364 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2291272 00:30:39.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2291272 Killed "${NVMF_APP[@]}" "$@" 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=2296148 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 2296148 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2296148 ']' 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
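[Annotation] This is the step that makes the variant "dirty": the nvmf_tgt holding the lvstore open (pid 2291272) is killed with SIGKILL, so the blobstore is never cleanly unloaded (hence the "line 75: 2291272 Killed" message from nvmf_lvs_grow.sh), and a fresh target is started in interrupt mode. Schematically, with paths shortened:

    kill -9 2291272   # SIGKILL the target mid-flight; nothing flushes or unloads the lvstore
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
    # waitforlisten then polls until the new target (pid 2296148 here)
    # accepts RPCs on /var/tmp/spdk.sock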
00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:39.623 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.623 [2024-10-17 19:38:03.209997] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:39.623 [2024-10-17 19:38:03.210934] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:39.623 [2024-10-17 19:38:03.210971] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:39.623 [2024-10-17 19:38:03.286975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.623 [2024-10-17 19:38:03.327064] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:39.623 [2024-10-17 19:38:03.327098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:39.623 [2024-10-17 19:38:03.327105] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:39.623 [2024-10-17 19:38:03.327111] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:39.623 [2024-10-17 19:38:03.327116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:39.623 [2024-10-17 19:38:03.327662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.623 [2024-10-17 19:38:03.394003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:39.623 [2024-10-17 19:38:03.394212] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
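[Annotation] With the interrupt-mode target up, re-creating the AIO bdev below re-loads the dirty lvstore; the bs_recover / "Recover: blob" notices that follow are the blobstore replaying its metadata. The test then asserts the pre-kill grow survived recovery. A sketch of that check, with paths shortened and the rpc output piped to jq in one step (the script runs them separately; expected values are the ones asserted below):

    rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096   # triggers blobstore recovery
    rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c \
        | jq -r '.[0].free_clusters'        # expect 61 (99 total minus 38 allocated by the lvol)
    rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c \
        | jq -r '.[0].total_data_clusters'  # expect 99, the grown size, not the original 49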
00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:39.882 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.882 [2024-10-17 19:38:03.633027] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:39.882 [2024-10-17 19:38:03.633220] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:39.882 [2024-10-17 19:38:03.633303] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 84bc0710-a714-4497-93b4-579c042f281d 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=84bc0710-a714-4497-93b4-579c042f281d 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:40.141 19:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 84bc0710-a714-4497-93b4-579c042f281d -t 2000 00:30:40.400 [ 00:30:40.400 { 00:30:40.400 "name": "84bc0710-a714-4497-93b4-579c042f281d", 00:30:40.400 "aliases": [ 00:30:40.400 "lvs/lvol" 00:30:40.400 ], 00:30:40.400 "product_name": "Logical Volume", 00:30:40.400 "block_size": 4096, 00:30:40.400 "num_blocks": 38912, 00:30:40.400 "uuid": "84bc0710-a714-4497-93b4-579c042f281d", 00:30:40.400 "assigned_rate_limits": { 00:30:40.400 "rw_ios_per_sec": 0, 00:30:40.400 "rw_mbytes_per_sec": 0, 00:30:40.400 
"r_mbytes_per_sec": 0, 00:30:40.400 "w_mbytes_per_sec": 0 00:30:40.400 }, 00:30:40.400 "claimed": false, 00:30:40.400 "zoned": false, 00:30:40.400 "supported_io_types": { 00:30:40.400 "read": true, 00:30:40.400 "write": true, 00:30:40.400 "unmap": true, 00:30:40.400 "flush": false, 00:30:40.400 "reset": true, 00:30:40.400 "nvme_admin": false, 00:30:40.400 "nvme_io": false, 00:30:40.400 "nvme_io_md": false, 00:30:40.400 "write_zeroes": true, 00:30:40.400 "zcopy": false, 00:30:40.400 "get_zone_info": false, 00:30:40.400 "zone_management": false, 00:30:40.400 "zone_append": false, 00:30:40.400 "compare": false, 00:30:40.400 "compare_and_write": false, 00:30:40.400 "abort": false, 00:30:40.400 "seek_hole": true, 00:30:40.400 "seek_data": true, 00:30:40.400 "copy": false, 00:30:40.400 "nvme_iov_md": false 00:30:40.400 }, 00:30:40.400 "driver_specific": { 00:30:40.400 "lvol": { 00:30:40.400 "lvol_store_uuid": "4df15b17-d67c-4efb-875f-5cb8bc490d8c", 00:30:40.400 "base_bdev": "aio_bdev", 00:30:40.400 "thin_provision": false, 00:30:40.400 "num_allocated_clusters": 38, 00:30:40.400 "snapshot": false, 00:30:40.400 "clone": false, 00:30:40.400 "esnap_clone": false 00:30:40.400 } 00:30:40.400 } 00:30:40.400 } 00:30:40.400 ] 00:30:40.400 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:30:40.400 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:40.400 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:40.659 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:40.659 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:40.659 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:40.659 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:40.659 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:40.918 [2024-10-17 19:38:04.576129] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:40.918 19:38:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:40.918 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:41.177 request: 00:30:41.177 { 00:30:41.177 "uuid": "4df15b17-d67c-4efb-875f-5cb8bc490d8c", 00:30:41.177 "method": "bdev_lvol_get_lvstores", 00:30:41.177 "req_id": 1 00:30:41.177 } 00:30:41.177 Got JSON-RPC error response 00:30:41.177 response: 00:30:41.177 { 00:30:41.177 "code": -19, 00:30:41.177 "message": "No such device" 00:30:41.177 } 00:30:41.177 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:30:41.177 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:41.177 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:41.177 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:41.177 19:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:41.437 aio_bdev 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 84bc0710-a714-4497-93b4-579c042f281d 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=84bc0710-a714-4497-93b4-579c042f281d 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:30:41.437 19:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:41.437 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 84bc0710-a714-4497-93b4-579c042f281d -t 2000 00:30:41.696 [ 00:30:41.696 { 00:30:41.696 "name": "84bc0710-a714-4497-93b4-579c042f281d", 00:30:41.696 "aliases": [ 00:30:41.696 "lvs/lvol" 00:30:41.696 ], 00:30:41.696 "product_name": "Logical Volume", 00:30:41.696 "block_size": 4096, 00:30:41.696 "num_blocks": 38912, 00:30:41.696 "uuid": "84bc0710-a714-4497-93b4-579c042f281d", 00:30:41.696 "assigned_rate_limits": { 00:30:41.696 "rw_ios_per_sec": 0, 00:30:41.696 "rw_mbytes_per_sec": 0, 00:30:41.696 "r_mbytes_per_sec": 0, 00:30:41.696 "w_mbytes_per_sec": 0 00:30:41.696 }, 00:30:41.696 "claimed": false, 00:30:41.696 "zoned": false, 00:30:41.696 "supported_io_types": { 00:30:41.696 "read": true, 00:30:41.696 "write": true, 00:30:41.696 "unmap": true, 00:30:41.696 "flush": false, 00:30:41.696 "reset": true, 00:30:41.696 "nvme_admin": false, 00:30:41.696 "nvme_io": false, 00:30:41.696 "nvme_io_md": false, 00:30:41.696 "write_zeroes": true, 00:30:41.696 "zcopy": false, 00:30:41.696 "get_zone_info": false, 00:30:41.696 "zone_management": false, 00:30:41.696 "zone_append": false, 00:30:41.696 "compare": false, 00:30:41.696 "compare_and_write": false, 00:30:41.696 "abort": false, 00:30:41.696 "seek_hole": true, 00:30:41.696 "seek_data": true, 00:30:41.696 "copy": false, 00:30:41.696 "nvme_iov_md": false 00:30:41.696 }, 00:30:41.696 "driver_specific": { 00:30:41.696 "lvol": { 00:30:41.696 "lvol_store_uuid": "4df15b17-d67c-4efb-875f-5cb8bc490d8c", 00:30:41.696 "base_bdev": "aio_bdev", 00:30:41.696 "thin_provision": false, 00:30:41.696 "num_allocated_clusters": 38, 00:30:41.696 "snapshot": false, 00:30:41.696 "clone": false, 00:30:41.696 "esnap_clone": false 00:30:41.696 } 00:30:41.696 } 00:30:41.696 } 00:30:41.696 ] 00:30:41.696 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:30:41.696 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:41.696 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:41.955 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:41.955 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:41.955 19:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:42.214 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:42.214 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 84bc0710-a714-4497-93b4-579c042f281d 00:30:42.214 19:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4df15b17-d67c-4efb-875f-5cb8bc490d8c 00:30:42.473 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:42.731 00:30:42.731 real 0m16.965s 00:30:42.731 user 0m34.310s 00:30:42.731 sys 0m3.830s 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:42.731 ************************************ 00:30:42.731 END TEST lvs_grow_dirty 00:30:42.731 ************************************ 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:42.731 nvmf_trace.0 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
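The END TEST banner above closes lvs_grow_dirty; before the target is unloaded, process_shm archives any SPDK shared-memory trace files (nvmf_trace.0 here) for offline analysis. A minimal sketch of that archiving step, assuming shm id 0 and a generic output/ directory (the run above writes into the workspace's ../output path):

  # find trace files left by app instance 0 (suffix .0) and tar each one
  shm_files=$(find /dev/shm -name '*.0' -printf '%f\n')
  for n in $shm_files; do
      # -C /dev/shm keeps archive members relative, matching the tar output above
      tar -C /dev/shm/ -cvzf "output/${n}_shm.tar.gz" "$n"
  done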
00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:42.731 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:42.731 rmmod nvme_tcp 00:30:42.990 rmmod nvme_fabrics 00:30:42.990 rmmod nvme_keyring 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 2296148 ']' 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 2296148 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2296148 ']' 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2296148 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2296148 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2296148' 00:30:42.990 killing process with pid 2296148 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2296148 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2296148 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:42.990 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:43.249 19:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.154 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:45.154 00:30:45.154 real 0m41.501s 00:30:45.154 user 0m51.835s 00:30:45.154 sys 0m10.049s 00:30:45.154 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:45.154 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:45.154 ************************************ 00:30:45.154 END TEST nvmf_lvs_grow 00:30:45.155 ************************************ 00:30:45.155 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:45.155 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:45.155 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.155 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:45.155 ************************************ 00:30:45.155 START TEST nvmf_bdev_io_wait 00:30:45.155 ************************************ 00:30:45.155 19:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:45.414 * Looking for test storage... 
00:30:45.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:45.414 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:45.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.415 --rc genhtml_branch_coverage=1 00:30:45.415 --rc genhtml_function_coverage=1 00:30:45.415 --rc genhtml_legend=1 00:30:45.415 --rc geninfo_all_blocks=1 00:30:45.415 --rc geninfo_unexecuted_blocks=1 00:30:45.415 00:30:45.415 ' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:45.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.415 --rc genhtml_branch_coverage=1 00:30:45.415 --rc genhtml_function_coverage=1 00:30:45.415 --rc genhtml_legend=1 00:30:45.415 --rc geninfo_all_blocks=1 00:30:45.415 --rc geninfo_unexecuted_blocks=1 00:30:45.415 00:30:45.415 ' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:45.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.415 --rc genhtml_branch_coverage=1 00:30:45.415 --rc genhtml_function_coverage=1 00:30:45.415 --rc genhtml_legend=1 00:30:45.415 --rc geninfo_all_blocks=1 00:30:45.415 --rc geninfo_unexecuted_blocks=1 00:30:45.415 00:30:45.415 ' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:45.415 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:45.415 --rc genhtml_branch_coverage=1 00:30:45.415 --rc genhtml_function_coverage=1 00:30:45.415 --rc genhtml_legend=1 00:30:45.415 --rc geninfo_all_blocks=1 00:30:45.415 --rc 
geninfo_unexecuted_blocks=1 00:30:45.415 00:30:45.415 ' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:45.415 19:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.987 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:51.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:51.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:51.988 Found net devices under 0000:86:00.0: cvl_0_0 00:30:51.988 
19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:51.988 Found net devices under 0000:86:00.1: cvl_0_1 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.988 19:38:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:30:51.988 00:30:51.988 --- 10.0.0.2 ping statistics --- 00:30:51.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.988 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:30:51.988 00:30:51.988 --- 10.0.0.1 ping statistics --- 00:30:51.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.988 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=2300241 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 2300241 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2300241 ']' 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
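Both pings succeeding confirms the nvmf_tcp_init topology set up above: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the traced commands (interface names and addresses are specific to this host):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port in
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator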
00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:51.988 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 [2024-10-17 19:38:15.141260] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:51.989 [2024-10-17 19:38:15.142142] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:51.989 [2024-10-17 19:38:15.142173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.989 [2024-10-17 19:38:15.206176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.989 [2024-10-17 19:38:15.249807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.989 [2024-10-17 19:38:15.249845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.989 [2024-10-17 19:38:15.249854] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.989 [2024-10-17 19:38:15.249860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.989 [2024-10-17 19:38:15.249865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.989 [2024-10-17 19:38:15.254619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.989 [2024-10-17 19:38:15.254681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.989 [2024-10-17 19:38:15.254785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.989 [2024-10-17 19:38:15.254786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.989 [2024-10-17 19:38:15.255044] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
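The EAL banner and the four reactor notices above come from the target process launched just before them; condensed, with paths relative to the SPDK checkout, the invocation is:

  # -i shm id, -e tracepoint group mask, -m reactor core mask;
  # --wait-for-rpc defers subsystem init until framework_start_init arrives over RPC
  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc

With --interrupt-mode the reactors sleep on event file descriptors instead of busy-polling, which is what the "to intr mode" thread notices reflect.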
00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 [2024-10-17 19:38:15.415615] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:51.989 [2024-10-17 19:38:15.416016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:51.989 [2024-10-17 19:38:15.416088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:51.989 [2024-10-17 19:38:15.416270] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
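From here the bring-up is plain JSON-RPC; rpc_cmd is a thin wrapper around scripts/rpc.py, so the calls just made and the transport/subsystem setup traced below would replay as the following sketch (arguments copied from the rpc_cmd lines; the tiny -p/-c bdev pool sizes are deliberate, starving the bdev_io pool so the io_wait retry path gets exercised):

  scripts/rpc.py bdev_set_options -p 5 -c 1    # bdev_io pool 5, per-thread cache 1
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -u: in-capsule data size;
                                                           # -o: TCP-specific flag from NVMF_TRANSPORT_OPTS
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420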
00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 [2024-10-17 19:38:15.427428] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 Malloc0 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.989 [2024-10-17 19:38:15.495481] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2300269 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2300271 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:51.989 { 00:30:51.989 "params": { 00:30:51.989 "name": "Nvme$subsystem", 00:30:51.989 "trtype": "$TEST_TRANSPORT", 00:30:51.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.989 "adrfam": "ipv4", 00:30:51.989 "trsvcid": "$NVMF_PORT", 00:30:51.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.989 "hdgst": ${hdgst:-false}, 00:30:51.989 "ddgst": ${ddgst:-false} 00:30:51.989 }, 00:30:51.989 "method": "bdev_nvme_attach_controller" 00:30:51.989 } 00:30:51.989 EOF 00:30:51.989 )") 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2300273 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:51.989 { 00:30:51.989 "params": { 00:30:51.989 "name": "Nvme$subsystem", 00:30:51.989 "trtype": "$TEST_TRANSPORT", 00:30:51.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.989 "adrfam": "ipv4", 00:30:51.989 "trsvcid": "$NVMF_PORT", 00:30:51.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.989 "hdgst": ${hdgst:-false}, 00:30:51.989 "ddgst": ${ddgst:-false} 00:30:51.989 }, 00:30:51.989 "method": "bdev_nvme_attach_controller" 00:30:51.989 } 00:30:51.989 EOF 00:30:51.989 )") 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=2300276 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:51.989 { 00:30:51.989 "params": { 00:30:51.989 "name": "Nvme$subsystem", 00:30:51.989 "trtype": "$TEST_TRANSPORT", 00:30:51.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.989 "adrfam": "ipv4", 00:30:51.989 "trsvcid": "$NVMF_PORT", 00:30:51.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.989 "hdgst": ${hdgst:-false}, 00:30:51.989 "ddgst": ${ddgst:-false} 00:30:51.989 }, 00:30:51.989 "method": "bdev_nvme_attach_controller" 00:30:51.989 } 00:30:51.989 EOF 00:30:51.989 )") 00:30:51.989 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:30:51.990 { 00:30:51.990 "params": { 00:30:51.990 "name": "Nvme$subsystem", 00:30:51.990 "trtype": "$TEST_TRANSPORT", 00:30:51.990 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:51.990 "adrfam": "ipv4", 00:30:51.990 "trsvcid": "$NVMF_PORT", 00:30:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:51.990 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:51.990 "hdgst": ${hdgst:-false}, 00:30:51.990 "ddgst": ${ddgst:-false} 00:30:51.990 }, 00:30:51.990 "method": "bdev_nvme_attach_controller" 00:30:51.990 } 00:30:51.990 EOF 00:30:51.990 )") 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2300269 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
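The heredoc blocks traced above are how gen_nvmf_target_json assembles one bdev_nvme_attach_controller stanza per bdevperf instance: the $TEST_TRANSPORT / $NVMF_FIRST_TARGET_IP / $NVMF_PORT placeholders expand when the heredoc is cat'ed into the config array, and jq then validates and prints the assembled document. A minimal sketch of the same idea follows; the function name and the surrounding "subsystems" wrapper are illustrative assumptions, not the suite's exact helper.

# Sketch (assumed helper, not SPDK's own): emit a bdevperf-style JSON config
# containing one NVMe-oF attach-controller stanza, mirroring the params that
# the trace above prints. Requires jq; unquoted EOF lets the shell expand vars.
gen_target_json() {
    local n=${1:-1}
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme$n",
            "trtype": "${TEST_TRANSPORT:-tcp}",
            "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
            "adrfam": "ipv4",
            "trsvcid": "${NVMF_PORT:-4420}",
            "subnqn": "nqn.2016-06.io.spdk:cnode$n",
            "hostnqn": "nqn.2016-06.io.spdk:host$n",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
}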
00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:51.990 "params": { 00:30:51.990 "name": "Nvme1", 00:30:51.990 "trtype": "tcp", 00:30:51.990 "traddr": "10.0.0.2", 00:30:51.990 "adrfam": "ipv4", 00:30:51.990 "trsvcid": "4420", 00:30:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.990 "hdgst": false, 00:30:51.990 "ddgst": false 00:30:51.990 }, 00:30:51.990 "method": "bdev_nvme_attach_controller" 00:30:51.990 }' 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:51.990 "params": { 00:30:51.990 "name": "Nvme1", 00:30:51.990 "trtype": "tcp", 00:30:51.990 "traddr": "10.0.0.2", 00:30:51.990 "adrfam": "ipv4", 00:30:51.990 "trsvcid": "4420", 00:30:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.990 "hdgst": false, 00:30:51.990 "ddgst": false 00:30:51.990 }, 00:30:51.990 "method": "bdev_nvme_attach_controller" 00:30:51.990 }' 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:51.990 "params": { 00:30:51.990 "name": "Nvme1", 00:30:51.990 "trtype": "tcp", 00:30:51.990 "traddr": "10.0.0.2", 00:30:51.990 "adrfam": "ipv4", 00:30:51.990 "trsvcid": "4420", 00:30:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.990 "hdgst": false, 00:30:51.990 "ddgst": false 00:30:51.990 }, 00:30:51.990 "method": "bdev_nvme_attach_controller" 00:30:51.990 }' 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:30:51.990 19:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:30:51.990 "params": { 00:30:51.990 "name": "Nvme1", 00:30:51.990 "trtype": "tcp", 00:30:51.990 "traddr": "10.0.0.2", 00:30:51.990 "adrfam": "ipv4", 00:30:51.990 "trsvcid": "4420", 00:30:51.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:51.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:51.990 "hdgst": false, 00:30:51.990 "ddgst": false 00:30:51.990 }, 00:30:51.990 "method": "bdev_nvme_attach_controller" 00:30:51.990 }' 00:30:51.990 [2024-10-17 19:38:15.547258] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:51.990 [2024-10-17 19:38:15.547303] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:51.990 [2024-10-17 19:38:15.548481] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:30:51.990 [2024-10-17 19:38:15.548527] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:51.990 [2024-10-17 19:38:15.548791] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:51.990 [2024-10-17 19:38:15.548835] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:51.990 [2024-10-17 19:38:15.552316] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:30:51.990 [2024-10-17 19:38:15.552360] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:51.990 [2024-10-17 19:38:15.711070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.990 [2024-10-17 19:38:15.745377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:52.250 [2024-10-17 19:38:15.798465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.250 [2024-10-17 19:38:15.841314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:52.250 [2024-10-17 19:38:15.893682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.250 [2024-10-17 19:38:15.928515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.250 [2024-10-17 19:38:15.947471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:52.250 [2024-10-17 19:38:15.968335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:52.508 Running I/O for 1 seconds... 00:30:52.508 Running I/O for 1 seconds... 00:30:52.508 Running I/O for 1 seconds... 00:30:52.508 Running I/O for 1 seconds... 
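At this point four bdevperf instances run in parallel against the same cnode1 namespace, one per core and one per workload: write (-m 0x10, -i 1), read (-m 0x20, -i 2), flush (-m 0x40, -i 3) and unmap (-m 0x80, -i 4), each at queue depth 128 with 4096-byte I/O for 1 second and 256 MB of memory (-s 256). A condensed sketch of that launch pattern, reusing the illustrative gen_target_json helper above (the real script records WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID individually and waits on each):

# Sketch: launch the four bdevperf workloads in the background and wait.
BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
pids=()
for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec                     # core mask, instance id, workload
    # --json <(...) is what appears as --json /dev/fd/63 in the trace
    "$BDEVPERF" -m "$1" -i "$2" --json <(gen_target_json 1) \
        -q 128 -o 4096 -w "$3" -t 1 -s 256 &
    pids+=("$!")
done
wait "${pids[@]}"                    # matches the per-PID waits in the trace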
00:30:53.446 13234.00 IOPS, 51.70 MiB/s 00:30:53.446 Latency(us) 00:30:53.446 [2024-10-17T17:38:17.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.446 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:53.446 Nvme1n1 : 1.01 13277.59 51.87 0.00 0.00 9607.26 3323.61 10922.67 00:30:53.446 [2024-10-17T17:38:17.230Z] =================================================================================================================== 00:30:53.446 [2024-10-17T17:38:17.230Z] Total : 13277.59 51.87 0.00 0.00 9607.26 3323.61 10922.67 00:30:53.446 6628.00 IOPS, 25.89 MiB/s 00:30:53.446 Latency(us) 00:30:53.446 [2024-10-17T17:38:17.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.446 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:53.446 Nvme1n1 : 1.02 6642.81 25.95 0.00 0.00 19082.88 1505.77 30583.47 00:30:53.446 [2024-10-17T17:38:17.230Z] =================================================================================================================== 00:30:53.446 [2024-10-17T17:38:17.230Z] Total : 6642.81 25.95 0.00 0.00 19082.88 1505.77 30583.47 00:30:53.446 252248.00 IOPS, 985.34 MiB/s 00:30:53.446 Latency(us) 00:30:53.446 [2024-10-17T17:38:17.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.446 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:53.446 Nvme1n1 : 1.00 251867.87 983.86 0.00 0.00 505.44 225.28 1497.97 00:30:53.446 [2024-10-17T17:38:17.230Z] =================================================================================================================== 00:30:53.446 [2024-10-17T17:38:17.230Z] Total : 251867.87 983.86 0.00 0.00 505.44 225.28 1497.97 00:30:53.446 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2300271 00:30:53.446 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2300273 00:30:53.705 7532.00 IOPS, 29.42 MiB/s 00:30:53.705 Latency(us) 00:30:53.705 [2024-10-17T17:38:17.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:53.705 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:53.705 Nvme1n1 : 1.00 7635.17 29.82 0.00 0.00 16727.09 2964.72 38947.11 00:30:53.705 [2024-10-17T17:38:17.489Z] =================================================================================================================== 00:30:53.705 [2024-10-17T17:38:17.489Z] Total : 7635.17 29.82 0.00 0.00 16727.09 2964.72 38947.11 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2300276 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:53.705 rmmod nvme_tcp 00:30:53.705 rmmod nvme_fabrics 00:30:53.705 rmmod nvme_keyring 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 2300241 ']' 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 2300241 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2300241 ']' 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2300241 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2300241 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2300241' 00:30:53.705 killing process with pid 2300241 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2300241 00:30:53.705 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2300241 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 
00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:53.964 19:38:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:56.500 00:30:56.500 real 0m10.783s 00:30:56.500 user 0m14.874s 00:30:56.500 sys 0m6.476s 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:56.500 ************************************ 00:30:56.500 END TEST nvmf_bdev_io_wait 00:30:56.500 ************************************ 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:56.500 ************************************ 00:30:56.500 START TEST nvmf_queue_depth 00:30:56.500 ************************************ 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:56.500 * Looking for test storage... 
00:30:56.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:56.500 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:56.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.501 --rc genhtml_branch_coverage=1 00:30:56.501 --rc genhtml_function_coverage=1 00:30:56.501 --rc genhtml_legend=1 00:30:56.501 --rc geninfo_all_blocks=1 00:30:56.501 --rc geninfo_unexecuted_blocks=1 00:30:56.501 00:30:56.501 ' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:56.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.501 --rc genhtml_branch_coverage=1 00:30:56.501 --rc genhtml_function_coverage=1 00:30:56.501 --rc genhtml_legend=1 00:30:56.501 --rc geninfo_all_blocks=1 00:30:56.501 --rc geninfo_unexecuted_blocks=1 00:30:56.501 00:30:56.501 ' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:56.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.501 --rc genhtml_branch_coverage=1 00:30:56.501 --rc genhtml_function_coverage=1 00:30:56.501 --rc genhtml_legend=1 00:30:56.501 --rc geninfo_all_blocks=1 00:30:56.501 --rc geninfo_unexecuted_blocks=1 00:30:56.501 00:30:56.501 ' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:56.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:56.501 --rc genhtml_branch_coverage=1 00:30:56.501 --rc genhtml_function_coverage=1 00:30:56.501 --rc genhtml_legend=1 00:30:56.501 --rc geninfo_all_blocks=1 00:30:56.501 --rc 
geninfo_unexecuted_blocks=1 00:30:56.501 00:30:56.501 ' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:30:56.501 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:56.502 19:38:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:03.073 19:38:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:03.073 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:03.073 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:31:03.073 Found net devices under 0000:86:00.0: cvl_0_0 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:03.073 Found net devices under 0000:86:00.1: cvl_0_1 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:03.073 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:03.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:03.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:31:03.074 00:31:03.074 --- 10.0.0.2 ping statistics --- 00:31:03.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.074 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:03.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:03.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:31:03.074 00:31:03.074 --- 10.0.0.1 ping statistics --- 00:31:03.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:03.074 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=2304052 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 2304052 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2304052 ']' 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:03.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
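The network fixture traced above is what makes the phy test self-contained: one e810 port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened with a tagged iptables rule, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace in interrupt mode. Condensed into a sketch (interface names and paths as in this log; adapt to your NICs):

# Sketch of the namespace wiring performed by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                     # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1              # target ns -> initiator
# start the target app inside the namespace (interrupt mode, core mask 0x2)
ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &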
00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.074 19:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 [2024-10-17 19:38:25.946393] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:03.074 [2024-10-17 19:38:25.947287] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:31:03.074 [2024-10-17 19:38:25.947321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:03.074 [2024-10-17 19:38:26.026764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.074 [2024-10-17 19:38:26.066798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:03.074 [2024-10-17 19:38:26.066834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:03.074 [2024-10-17 19:38:26.066841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:03.074 [2024-10-17 19:38:26.066850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:03.074 [2024-10-17 19:38:26.066856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:03.074 [2024-10-17 19:38:26.067397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.074 [2024-10-17 19:38:26.132079] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:03.074 [2024-10-17 19:38:26.132292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
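With the target up (the thread.c notices above confirm app_thread and nvmf_tgt_poll_group_000 are in interrupt mode), the rpc_cmd calls that follow provision it the same way the bdev_io_wait test did: TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem cnode1, namespace, listener. rpc_cmd is the suite's wrapper around the app's RPC socket; the equivalent sequence expressed with SPDK's scripts/rpc.py (default socket /var/tmp/spdk.sock, reachable from the root namespace) looks roughly like this:

# Sketch: provision the NVMe-oF TCP target over JSON-RPC, as traced below.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192            # transport opts as traced
$RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420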
00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 [2024-10-17 19:38:26.204106] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 Malloc0 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.074 [2024-10-17 19:38:26.272227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2304076 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2304076 /var/tmp/bdevperf.sock 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2304076 ']' 00:31:03.074 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:03.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.075 [2024-10-17 19:38:26.323781] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
00:31:03.075 [2024-10-17 19:38:26.323825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2304076 ] 00:31:03.075 [2024-10-17 19:38:26.398325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.075 [2024-10-17 19:38:26.440339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:03.075 NVMe0n1 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.075 19:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:03.075 Running I/O for 10 seconds... 00:31:05.390 12288.00 IOPS, 48.00 MiB/s [2024-10-17T17:38:30.111Z] 12290.00 IOPS, 48.01 MiB/s [2024-10-17T17:38:31.047Z] 12492.00 IOPS, 48.80 MiB/s [2024-10-17T17:38:31.984Z] 12549.75 IOPS, 49.02 MiB/s [2024-10-17T17:38:32.920Z] 12585.80 IOPS, 49.16 MiB/s [2024-10-17T17:38:33.856Z] 12636.83 IOPS, 49.36 MiB/s [2024-10-17T17:38:34.793Z] 12723.14 IOPS, 49.70 MiB/s [2024-10-17T17:38:36.169Z] 12735.25 IOPS, 49.75 MiB/s [2024-10-17T17:38:37.107Z] 12754.56 IOPS, 49.82 MiB/s [2024-10-17T17:38:37.107Z] 12803.60 IOPS, 50.01 MiB/s 00:31:13.323 Latency(us) 00:31:13.323 [2024-10-17T17:38:37.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.323 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:13.323 Verification LBA range: start 0x0 length 0x4000 00:31:13.323 NVMe0n1 : 10.06 12828.18 50.11 0.00 0.00 79579.54 18849.40 49682.53 00:31:13.323 [2024-10-17T17:38:37.107Z] =================================================================================================================== 00:31:13.323 [2024-10-17T17:38:37.107Z] Total : 12828.18 50.11 0.00 0.00 79579.54 18849.40 49682.53 00:31:13.323 { 00:31:13.323 "results": [ 00:31:13.323 { 00:31:13.323 "job": "NVMe0n1", 00:31:13.323 "core_mask": "0x1", 00:31:13.323 "workload": "verify", 00:31:13.323 "status": "finished", 00:31:13.323 "verify_range": { 00:31:13.323 "start": 0, 00:31:13.323 "length": 16384 00:31:13.323 }, 00:31:13.323 "queue_depth": 1024, 00:31:13.323 "io_size": 4096, 00:31:13.323 "runtime": 10.060664, 00:31:13.323 "iops": 12828.17913410089, 00:31:13.323 "mibps": 50.1100747425816, 00:31:13.323 "io_failed": 0, 00:31:13.323 "io_timeout": 0, 00:31:13.323 "avg_latency_us": 79579.53641737692, 00:31:13.323 "min_latency_us": 18849.401904761904, 00:31:13.323 "max_latency_us": 49682.52952380952 00:31:13.323 } 
00:31:13.323 ], 00:31:13.323 "core_count": 1 00:31:13.323 } 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2304076 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2304076 ']' 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2304076 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304076 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304076' 00:31:13.323 killing process with pid 2304076 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2304076 00:31:13.323 Received shutdown signal, test time was about 10.000000 seconds 00:31:13.323 00:31:13.323 Latency(us) 00:31:13.323 [2024-10-17T17:38:37.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.323 [2024-10-17T17:38:37.107Z] =================================================================================================================== 00:31:13.323 [2024-10-17T17:38:37.107Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:13.323 19:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2304076 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:13.323 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:13.323 rmmod nvme_tcp 00:31:13.323 rmmod nvme_fabrics 00:31:13.582 rmmod nvme_keyring 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
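Per the JSON summary above, the run sustained about 12,828 IOPS (roughly 50.1 MiB/s) at queue depth 1024 over a 10.06 s runtime, with an average latency near 79.6 ms, which is consistent with a deliberately saturated deep queue. The initiator side can be reproduced with the same binaries the test invokes (a sketch; paths are this workspace's, run from the spdk directory):

    # bdevperf in wait mode (-z), driven over its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # attach the remote namespace exposed by the target configured earlier
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # kick off the workload; prints the JSON result blob shown above
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

From that blob, something like jq '.results[0].iops' would pull the IOPS figure (assuming jq is installed; the harness itself does not use it).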
00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 2304052 ']' 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 2304052 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2304052 ']' 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2304052 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2304052 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2304052' 00:31:13.582 killing process with pid 2304052 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2304052 00:31:13.582 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2304052 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:13.839 19:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:15.744 00:31:15.744 real 0m19.673s 00:31:15.744 user 0m22.743s 00:31:15.744 sys 0m6.219s 00:31:15.744 19:38:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:15.744 ************************************ 00:31:15.744 END TEST nvmf_queue_depth 00:31:15.744 ************************************ 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:15.744 ************************************ 00:31:15.744 START TEST nvmf_target_multipath 00:31:15.744 ************************************ 00:31:15.744 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:16.003 * Looking for test storage... 00:31:16.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@344 -- # case "$op" in 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:16.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.003 --rc genhtml_branch_coverage=1 00:31:16.003 --rc genhtml_function_coverage=1 00:31:16.003 --rc genhtml_legend=1 00:31:16.003 --rc geninfo_all_blocks=1 00:31:16.003 --rc geninfo_unexecuted_blocks=1 00:31:16.003 00:31:16.003 ' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:16.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.003 --rc genhtml_branch_coverage=1 00:31:16.003 --rc genhtml_function_coverage=1 00:31:16.003 --rc genhtml_legend=1 00:31:16.003 --rc geninfo_all_blocks=1 00:31:16.003 --rc geninfo_unexecuted_blocks=1 00:31:16.003 00:31:16.003 ' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:16.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.003 --rc genhtml_branch_coverage=1 00:31:16.003 --rc genhtml_function_coverage=1 00:31:16.003 --rc genhtml_legend=1 
00:31:16.003 --rc geninfo_all_blocks=1 00:31:16.003 --rc geninfo_unexecuted_blocks=1 00:31:16.003 00:31:16.003 ' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:16.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.003 --rc genhtml_branch_coverage=1 00:31:16.003 --rc genhtml_function_coverage=1 00:31:16.003 --rc genhtml_legend=1 00:31:16.003 --rc geninfo_all_blocks=1 00:31:16.003 --rc geninfo_unexecuted_blocks=1 00:31:16.003 00:31:16.003 ' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.003 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:16.004 19:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.576 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.577 19:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:22.577 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:22.577 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.577 19:38:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:22.577 Found net devices under 0000:86:00.0: cvl_0_0 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:22.577 Found net devices under 0000:86:00.1: cvl_0_1 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.577 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:22.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms 00:31:22.577 00:31:22.577 --- 10.0.0.2 ping statistics --- 00:31:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.578 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:22.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:31:22.578 00:31:22.578 --- 10.0.0.1 ping statistics --- 00:31:22.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.578 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:22.578 only one NIC for nvmf test 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:22.578 rmmod nvme_tcp 00:31:22.578 rmmod nvme_fabrics 00:31:22.578 rmmod nvme_keyring 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:22.578 19:38:45 
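The multipath harness isolates the target-side port in its own network namespace before probing connectivity; the nvmf_tcp_init plumbing traced above boils down to the following (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this E810 host and will differ elsewhere):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # firewall rule is tagged SPDK_NVMF so cleanup can strip exactly these rules later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF'
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns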
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:22.578 19:38:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:31:24.651 19:38:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:24.651 00:31:24.651 real 0m8.321s 00:31:24.651 user 0m1.851s 00:31:24.651 sys 0m4.471s 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:24.651 ************************************ 00:31:24.651 END TEST nvmf_target_multipath 00:31:24.651 ************************************ 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:24.651 ************************************ 00:31:24.651 START TEST nvmf_zcopy 00:31:24.651 ************************************ 00:31:24.651 19:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:24.651 * Looking for test storage... 
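For reference, the multipath teardown traced just above (the test exits 0 early because only one NIC is present, so there is no second path to exercise) reduces to restoring the firewall and undoing the namespace setup. A sketch; the body of _remove_spdk_ns is not traced here, so the netns deletion line is an assumption:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules the test tagged
    ip netns delete cvl_0_0_ns_spdk                        # assumed: what _remove_spdk_ns does internally
    ip -4 addr flush cvl_0_1                               # return the initiator port to a clean state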
00:31:24.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:24.651 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:24.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.652 --rc genhtml_branch_coverage=1 00:31:24.652 --rc genhtml_function_coverage=1 00:31:24.652 --rc genhtml_legend=1 00:31:24.652 --rc geninfo_all_blocks=1 00:31:24.652 --rc geninfo_unexecuted_blocks=1 00:31:24.652 00:31:24.652 ' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:24.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.652 --rc genhtml_branch_coverage=1 00:31:24.652 --rc genhtml_function_coverage=1 00:31:24.652 --rc genhtml_legend=1 00:31:24.652 --rc geninfo_all_blocks=1 00:31:24.652 --rc geninfo_unexecuted_blocks=1 00:31:24.652 00:31:24.652 ' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:24.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.652 --rc genhtml_branch_coverage=1 00:31:24.652 --rc genhtml_function_coverage=1 00:31:24.652 --rc genhtml_legend=1 00:31:24.652 --rc geninfo_all_blocks=1 00:31:24.652 --rc geninfo_unexecuted_blocks=1 00:31:24.652 00:31:24.652 ' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:24.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:24.652 --rc genhtml_branch_coverage=1 00:31:24.652 --rc genhtml_function_coverage=1 00:31:24.652 --rc genhtml_legend=1 00:31:24.652 --rc geninfo_all_blocks=1 00:31:24.652 --rc geninfo_unexecuted_blocks=1 00:31:24.652 00:31:24.652 ' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:24.652 19:38:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:24.652 19:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:29.942 19:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:29.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:29.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:29.942 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:29.943 Found net devices under 0000:86:00.0: cvl_0_0 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:29.943 Found net devices under 0000:86:00.1: cvl_0_1 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:29.943 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.203 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.203 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.203 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.203 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:30.203 19:38:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.203 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:30.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:31:30.204 00:31:30.204 --- 10.0.0.2 ping statistics --- 00:31:30.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.204 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:30.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:31:30.204 00:31:30.204 --- 10.0.0.1 ping statistics --- 00:31:30.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.204 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:31:30.204 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:31:30.463 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:30.463 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:31:30.463 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:30.463 19:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.463 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=2312729 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 2312729 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2312729 ']' 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:30.464 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.464 [2024-10-17 19:38:54.056648] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:30.464 [2024-10-17 19:38:54.057629] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:31:30.464 [2024-10-17 19:38:54.057664] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.464 [2024-10-17 19:38:54.137680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.464 [2024-10-17 19:38:54.177828] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.464 [2024-10-17 19:38:54.177877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.464 [2024-10-17 19:38:54.177885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.464 [2024-10-17 19:38:54.177890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.464 [2024-10-17 19:38:54.177896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.464 [2024-10-17 19:38:54.178447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.464 [2024-10-17 19:38:54.245366] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:30.464 [2024-10-17 19:38:54.245591] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
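Condensed, the sequence that produced the nvmfpid above: the harness splits the two E810 ports into separate network stacks by moving cvl_0_0 into its own namespace, addresses both ends, then launches nvmf_tgt inside that namespace and waits for its RPC socket. A sketch assembled from the commands traced in this log (the socket-poll loop is illustrative; the real waitforlisten in autotest_common.sh is more careful):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # block until the target's RPC socket exists before issuing any rpc.py calls
  while [ ! -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
      sleep 0.1
  done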
00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 [2024-10-17 19:38:54.323117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 [2024-10-17 19:38:54.351376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:30.724 19:38:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 malloc0 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:31:30.724 { 00:31:30.724 "params": { 00:31:30.724 "name": "Nvme$subsystem", 00:31:30.724 "trtype": "$TEST_TRANSPORT", 00:31:30.724 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.724 "adrfam": "ipv4", 00:31:30.724 "trsvcid": "$NVMF_PORT", 00:31:30.724 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.724 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.724 "hdgst": ${hdgst:-false}, 00:31:30.724 "ddgst": ${ddgst:-false} 00:31:30.724 }, 00:31:30.724 "method": "bdev_nvme_attach_controller" 00:31:30.724 } 00:31:30.724 EOF 00:31:30.724 )") 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:31:30.724 19:38:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:30.724 "params": { 00:31:30.724 "name": "Nvme1", 00:31:30.724 "trtype": "tcp", 00:31:30.724 "traddr": "10.0.0.2", 00:31:30.724 "adrfam": "ipv4", 00:31:30.724 "trsvcid": "4420", 00:31:30.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.724 "hdgst": false, 00:31:30.724 "ddgst": false 00:31:30.724 }, 00:31:30.724 "method": "bdev_nvme_attach_controller" 00:31:30.724 }' 00:31:30.724 [2024-10-17 19:38:54.452532] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
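The rpc_cmd invocations above are thin wrappers around scripts/rpc.py talking to /var/tmp/spdk.sock; the provisioning for this test, replayed directly with the flags exactly as issued in the trace:

  rpc=./scripts/rpc.py   # run from the spdk checkout
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0                   # 32 MiB ramdisk, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1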
00:31:30.724 [2024-10-17 19:38:54.452582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2312941 ]
00:31:30.983 [2024-10-17 19:38:54.526664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:30.983 [2024-10-17 19:38:54.567714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:30.983 Running I/O for 10 seconds...
00:31:33.301 8346.00 IOPS, 65.20 MiB/s
[2024-10-17T17:38:58.020Z] 8394.00 IOPS, 65.58 MiB/s
[2024-10-17T17:38:58.957Z] 8408.67 IOPS, 65.69 MiB/s
[2024-10-17T17:38:59.893Z] 8420.75 IOPS, 65.79 MiB/s
[2024-10-17T17:39:00.828Z] 8425.00 IOPS, 65.82 MiB/s
[2024-10-17T17:39:02.204Z] 8410.50 IOPS, 65.71 MiB/s
[2024-10-17T17:39:03.139Z] 8410.29 IOPS, 65.71 MiB/s
[2024-10-17T17:39:04.076Z] 8404.00 IOPS, 65.66 MiB/s
[2024-10-17T17:39:05.013Z] 8398.44 IOPS, 65.61 MiB/s
[2024-10-17T17:39:05.013Z] 8391.30 IOPS, 65.56 MiB/s
00:31:41.229 Latency(us)
00:31:41.229 [2024-10-17T17:39:05.013Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:41.229 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:31:41.229 Verification LBA range: start 0x0 length 0x1000
00:31:41.229 Nvme1n1 : 10.05 8361.08 65.32 0.00 0.00 15204.95 2715.06 44439.65
00:31:41.229 [2024-10-17T17:39:05.013Z] ===================================================================================================================
00:31:41.229 [2024-10-17T17:39:05.013Z] Total : 8361.08 65.32 0.00 0.00 15204.95 2715.06 44439.65
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2314692
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config=()
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:31:41.229 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:31:41.229 {
00:31:41.229 "params": {
00:31:41.229 "name": "Nvme$subsystem",
00:31:41.229 "trtype": "$TEST_TRANSPORT",
00:31:41.229 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:41.230 "adrfam": "ipv4",
00:31:41.230 "trsvcid": "$NVMF_PORT",
00:31:41.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:41.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:41.230 "hdgst": ${hdgst:-false},
00:31:41.230 "ddgst": ${ddgst:-false}
00:31:41.230 },
00:31:41.230 "method": "bdev_nvme_attach_controller"
00:31:41.230 }
00:31:41.230 EOF
00:31:41.230 )")
00:31:41.230 19:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@580 -- # cat
00:31:41.230
[2024-10-17 19:39:05.002795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.230 [2024-10-17 19:39:05.002840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.230 19:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 00:31:41.230 19:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:31:41.230 19:39:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:31:41.230 "params": { 00:31:41.230 "name": "Nvme1", 00:31:41.230 "trtype": "tcp", 00:31:41.230 "traddr": "10.0.0.2", 00:31:41.230 "adrfam": "ipv4", 00:31:41.230 "trsvcid": "4420", 00:31:41.230 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.230 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.230 "hdgst": false, 00:31:41.230 "ddgst": false 00:31:41.230 }, 00:31:41.230 "method": "bdev_nvme_attach_controller" 00:31:41.230 }' 00:31:41.489 [2024-10-17 19:39:05.014756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.014769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.026749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.026759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.038749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.038758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.039617] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
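gen_nvmf_target_json wraps the params fragment printed above in SPDK's JSON-config envelope and hands it to bdevperf over a /dev/fd pipe. Reconstructed under that assumption (the envelope shape is inferred from the helper's usual output, not shown in this trace; the values are verbatim from the resolved printf), the document bdevperf reads is roughly:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }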
00:31:41.489 [2024-10-17 19:39:05.039668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2314692 ] 00:31:41.489 [2024-10-17 19:39:05.050749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.050761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.062746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.062755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.074749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.074759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.086750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.086760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.098748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.098758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.110748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.110757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.113273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.489 [2024-10-17 19:39:05.122748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.122762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.134749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.134760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.146750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.146762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.156772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.489 [2024-10-17 19:39:05.158748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.158759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.170759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.170776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.182754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.182770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.194758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:31:41.489 [2024-10-17 19:39:05.194773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.206750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.206762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.489 [2024-10-17 19:39:05.218761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.489 [2024-10-17 19:39:05.218780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.490 [2024-10-17 19:39:05.230750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.490 [2024-10-17 19:39:05.230760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.490 [2024-10-17 19:39:05.242762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.490 [2024-10-17 19:39:05.242784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.490 [2024-10-17 19:39:05.254755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.490 [2024-10-17 19:39:05.254770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.490 [2024-10-17 19:39:05.266756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.490 [2024-10-17 19:39:05.266770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.278750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.278759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.290750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.290760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.302745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.302755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.314753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.314767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.326754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.326769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.377911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.377930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.386754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.386768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 Running I/O for 5 seconds... 
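The repeated "Requested NSID 1 already in use" / "Unable to add namespace" pairs running through the rest of this test are expected: while the second bdevperf instance drives randrw I/O for 5 seconds, the script keeps re-issuing nvmf_subsystem_add_ns for a namespace ID that already exists, and each failed attempt pauses and resumes the subsystem, exercising the zcopy pause/resume paths. A sketch of such a churn loop, consistent with the trace (the exact loop in target/zcopy.sh may differ):

  # hammer the subsystem with add_ns while bdevperf (perfpid above) is running;
  # every call fails with "NSID 1 already in use" but forces a pause/resume cycle
  while kill -0 "$perfpid" 2>/dev/null; do
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done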
00:31:41.749 [2024-10-17 19:39:05.400736] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.400755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.416027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.416045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.431114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.431132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.446757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.446776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.458167] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.458185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.471885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.471904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.487072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.487090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.498881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.498899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.511963] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.511987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:41.749 [2024-10-17 19:39:05.526591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:41.749 [2024-10-17 19:39:05.526617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.009 [2024-10-17 19:39:05.538258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.009 [2024-10-17 19:39:05.538277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.009 [2024-10-17 19:39:05.552252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.009 [2024-10-17 19:39:05.552273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.009 [2024-10-17 19:39:05.567343] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.009 [2024-10-17 19:39:05.567362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.009 [2024-10-17 19:39:05.582980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.009 [2024-10-17 19:39:05.582998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:42.009 [2024-10-17 19:39:05.598843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:42.009 
00:31:42.788 16411.00 IOPS, 128.21 MiB/s [2024-10-17T17:39:06.572Z]
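Interleaved with the error loop, the I/O workload running against the same target prints a throughput sample about once per second (the sampler prints UTC, 17:39, while the bracketed RPC timestamps read 19:39, hence the apparent two-hour offset). The MiB/s figure is consistent with the IOPS figure at an 8 KiB I/O size; the 8 KiB is inferred from the numbers, not stated in the log. A quick check in C:

    /* Sanity-check the sample above: 16411 IOPS at an assumed 8 KiB
     * I/O size should reproduce the reported 128.21 MiB/s. */
    #include <stdio.h>

    int main(void)
    {
        double iops = 16411.00;
        double io_bytes = 8192.0; /* assumed 8 KiB per I/O */
        double mib_s = iops * io_bytes / (1024.0 * 1024.0);

        printf("%.2f MiB/s\n", mib_s); /* prints 128.21 */
        return 0;
    }

The later samples check out the same way (16487.00, 16536.67, and 16530.00 IOPS map to 128.80, 129.19, and 129.14 MiB/s), so throughput stays essentially flat while the failing RPCs hammer the subsystem.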
00:31:43.845 16487.00 IOPS, 128.80 MiB/s [2024-10-17T17:39:07.629Z]
00:31:44.624 16536.67 IOPS, 129.19 MiB/s [2024-10-17T17:39:08.408Z]
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.318301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.331872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.331895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.346446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.346465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.358295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.358314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.372026] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.372043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.386611] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.386630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.397606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.397624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 16530.00 IOPS, 129.14 MiB/s [2024-10-17T17:39:09.444Z] [2024-10-17 19:39:09.411553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.411571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.426833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.426850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.660 [2024-10-17 19:39:09.438378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.660 [2024-10-17 19:39:09.438398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.451920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.451940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.467089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.467106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.483439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.483458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.499084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.499103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.510380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:45.918 [2024-10-17 19:39:09.510399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.524849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.524868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.539083] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.539102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.554440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.554459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.565593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.565617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.580566] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.580583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.595079] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.595097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.918 [2024-10-17 19:39:09.607809] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.918 [2024-10-17 19:39:09.607827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.622501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.622519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.633747] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.633765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.647630] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.647648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.662890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.662910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.674120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.674137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.687971] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.687989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.919 [2024-10-17 19:39:09.702735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.919 [2024-10-17 19:39:09.702755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.177 [2024-10-17 19:39:09.713904] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.177 [2024-10-17 19:39:09.713923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.177 [2024-10-17 19:39:09.728071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.177 [2024-10-17 19:39:09.728090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.177 [2024-10-17 19:39:09.742587] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.177 [2024-10-17 19:39:09.742613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.177 [2024-10-17 19:39:09.754108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.177 [2024-10-17 19:39:09.754126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.177 [2024-10-17 19:39:09.767320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.177 [2024-10-17 19:39:09.767338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.177 [2024-10-17 19:39:09.782504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.782523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.794648] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.794667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.807374] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.807392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.818489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.818506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.832540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.832558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.846999] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.847017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.862706] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.862724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.873828] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.873846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.887661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.887679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.902377] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.902396] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.913694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.913711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.927650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.927667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.942059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.942077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.178 [2024-10-17 19:39:09.954947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.178 [2024-10-17 19:39:09.954964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:09.970686] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:09.970706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:09.982261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:09.982280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:09.996371] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:09.996390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.012318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.012340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.027859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.027879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.043591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.043621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.055117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.055135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.067617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.067636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.078144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.078163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.093800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.093820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.107735] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.107755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.123808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.123827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.138334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.138353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.150309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.150327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.164365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.164383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.179470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.179488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.194129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.194148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.207266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.207285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.437 [2024-10-17 19:39:10.218754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.437 [2024-10-17 19:39:10.218774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.232145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.232164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.247266] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.247285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.258892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.258910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.272565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.272585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.287499] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.287518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.303188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.303209] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.319559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.319578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.335947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.335966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.352258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.352277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.367969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.695 [2024-10-17 19:39:10.367993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.695 [2024-10-17 19:39:10.384537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.384556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 [2024-10-17 19:39:10.400115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.400134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 16482.60 IOPS, 128.77 MiB/s [2024-10-17T17:39:10.480Z] [2024-10-17 19:39:10.410759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.410778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 00:31:46.696 Latency(us) 00:31:46.696 [2024-10-17T17:39:10.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:46.696 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:31:46.696 Nvme1n1 : 5.01 16482.93 128.77 0.00 0.00 7757.69 1989.49 14542.75 00:31:46.696 [2024-10-17T17:39:10.480Z] =================================================================================================================== 00:31:46.696 [2024-10-17T17:39:10.480Z] Total : 16482.93 128.77 0.00 0.00 7757.69 1989.49 14542.75 00:31:46.696 [2024-10-17 19:39:10.422752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.422767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 [2024-10-17 19:39:10.434755] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.434769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 [2024-10-17 19:39:10.446758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.446781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 [2024-10-17 19:39:10.458751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 19:39:10.458765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.696 [2024-10-17 19:39:10.470753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.696 [2024-10-17 
19:39:10.470766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.482748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.482760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.494753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.494772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.506749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.506763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.518746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.518757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.530749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.530760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.542750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.542762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.554746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.554756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 [2024-10-17 19:39:10.566749] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:46.955 [2024-10-17 19:39:10.566764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:46.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2314692) - No such process 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2314692 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:46.955 delay0 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:46.955 19:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:31:47.091 [2024-10-17 19:39:10.713033] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:55.080 Initializing NVMe Controllers
00:31:55.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:55.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:55.080 Initialization complete. Launching workers.
00:31:55.080 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 266, failed: 26751
00:31:55.080 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26912, failed to submit 105
00:31:55.080 success 26794, unsuccessful 118, failed 0
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:55.080 rmmod nvme_tcp
00:31:55.080 rmmod nvme_fabrics
00:31:55.080 rmmod nvme_keyring
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 2312729 ']'
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 2312729
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2312729 ']'
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2312729
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
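The trace above is the interesting part of the zcopy abort path: NSID 1 is detached from nqn.2016-06.io.spdk:cnode1, malloc0 is wrapped in a delay bdev (delay0), and delay0 is re-attached as NSID 1 so that queued I/O lives long enough to be aborted. A minimal sketch of replaying those steps by hand, assuming an SPDK checkout as the working directory and a target still listening on the default RPC socket (/var/tmp/spdk.sock); every NQN, bdev name, and latency value below is copied from the trace:

    RPC=scripts/rpc.py

    # Detach the malloc-backed namespace (NSID 1) from cnode1.
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

    # Wrap malloc0 in a delay bdev; bdev_delay latencies are in microseconds,
    # so 1000000 means ~1 s average (-r/-w) and p99 (-t/-n) read/write latency.
    $RPC bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000

    # Re-attach the slow bdev as NSID 1.
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive randrw I/O at the target and abort it, as zcopy.sh@56 does.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The abort counters above are self-consistent: 26794 successful + 118 unsuccessful = 26912 aborts submitted, and 26912 + 105 failed-to-submit = 27017, which matches the 266 completed + 26751 failed I/Os.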
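nvmftestfini then tears the initiator side down before killing the target (pid 2312729). The module unload is the retry loop at nvmf/common.sh@124-129; its body is only partially visible in the trace, so the break and backoff handling in this sketch is an assumption:

    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            # 'modprobe -v -r nvme-tcp' pulls out nvme_tcp, nvme_fabrics and
            # nvme_keyring (the rmmod lines above); it can fail transiently
            # while connections drain, hence the retry.
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1   # assumption: back off before the next attempt
        done
        set -e
    }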
00:31:55.080 19:39:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2312729 00:31:55.080 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:55.080 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:55.080 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2312729' 00:31:55.080 killing process with pid 2312729 00:31:55.080 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2312729 00:31:55.080 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2312729 00:31:55.080 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.081 19:39:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:56.986 00:31:56.986 real 0m32.344s 00:31:56.986 user 0m41.419s 00:31:56.986 sys 0m13.368s 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:56.986 ************************************ 00:31:56.986 END TEST nvmf_zcopy 00:31:56.986 ************************************ 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:56.986 
************************************ 00:31:56.986 START TEST nvmf_nmic 00:31:56.986 ************************************ 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:56.986 * Looking for test storage... 00:31:56.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:31:56.986 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.987 --rc genhtml_branch_coverage=1 00:31:56.987 --rc genhtml_function_coverage=1 00:31:56.987 --rc genhtml_legend=1 00:31:56.987 --rc geninfo_all_blocks=1 00:31:56.987 --rc geninfo_unexecuted_blocks=1 00:31:56.987 00:31:56.987 ' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.987 --rc genhtml_branch_coverage=1 00:31:56.987 --rc genhtml_function_coverage=1 00:31:56.987 --rc genhtml_legend=1 00:31:56.987 --rc geninfo_all_blocks=1 00:31:56.987 --rc geninfo_unexecuted_blocks=1 00:31:56.987 00:31:56.987 ' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.987 --rc genhtml_branch_coverage=1 00:31:56.987 --rc genhtml_function_coverage=1 00:31:56.987 --rc genhtml_legend=1 00:31:56.987 --rc geninfo_all_blocks=1 00:31:56.987 --rc geninfo_unexecuted_blocks=1 00:31:56.987 00:31:56.987 ' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:56.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.987 --rc genhtml_branch_coverage=1 00:31:56.987 --rc genhtml_function_coverage=1 00:31:56.987 --rc genhtml_legend=1 00:31:56.987 --rc geninfo_all_blocks=1 00:31:56.987 --rc geninfo_unexecuted_blocks=1 00:31:56.987 00:31:56.987 ' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.987 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.987 19:39:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:56.988 19:39:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:03.556 19:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:03.556 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:03.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.557 19:39:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:03.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:03.557 Found net devices under 0000:86:00.0: cvl_0_0 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:03.557 
19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:03.557 Found net devices under 0000:86:00.1: cvl_0_1 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
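The xtrace above is nvmf_tcp_init from nvmf/common.sh: having found two ports of the same E810 NIC (cvl_0_0 and cvl_0_1), the harness moves the target-side port into a dedicated network namespace so initiator-to-target traffic must traverse the physical link rather than the local stack. A minimal sketch of the sequence traced so far, with interface, namespace, and address names taken verbatim from the log:

    #!/usr/bin/env bash
    # Namespace layout from the trace: cvl_0_0 becomes the target port inside
    # cvl_0_0_ns_spdk; cvl_0_1 stays in the default namespace as the initiator.
    set -e

    TARGET_NS=cvl_0_0_ns_spdk
    TARGET_IF=cvl_0_0
    INITIATOR_IF=cvl_0_1

    # Drop any stale IPv4 addresses before reassigning.
    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"

    # Create the namespace and move the target port into it.
    ip netns add "$TARGET_NS"
    ip link set "$TARGET_IF" netns "$TARGET_NS"

    # Initiator gets 10.0.0.1, target gets 10.0.0.2, on the same /24.
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

The link-up, iptables, and ping steps that follow in the trace complete the same routine.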
00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:03.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:03.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:32:03.557 00:32:03.557 --- 10.0.0.2 ping statistics --- 00:32:03.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.557 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:03.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:03.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:32:03.557 00:32:03.557 --- 10.0.0.1 ping statistics --- 00:32:03.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:03.557 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=2320668 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 2320668 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2320668 ']' 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:03.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:03.557 19:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.557 [2024-10-17 19:39:26.555007] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:03.557 [2024-10-17 19:39:26.555898] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:32:03.558 [2024-10-17 19:39:26.555933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:03.558 [2024-10-17 19:39:26.637356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:03.558 [2024-10-17 19:39:26.681579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:03.558 [2024-10-17 19:39:26.681621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:03.558 [2024-10-17 19:39:26.681629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:03.558 [2024-10-17 19:39:26.681634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:03.558 [2024-10-17 19:39:26.681639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:03.558 [2024-10-17 19:39:26.683212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.558 [2024-10-17 19:39:26.683231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:03.558 [2024-10-17 19:39:26.683318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.558 [2024-10-17 19:39:26.683319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:03.558 [2024-10-17 19:39:26.751329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:03.558 [2024-10-17 19:39:26.751433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:03.558 [2024-10-17 19:39:26.752165] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
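nvmfappstart, traced above, launches the target inside that namespace: nvmf_tgt runs with -i 0 (shared-memory id), -e 0xFFFF (all tracepoint groups), --interrupt-mode, and -m 0xF (cores 0-3, matching the four reactors started in the notices). waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A rough stand-in, assuming that polling rpc_get_methods is an acceptable substitute for the helper's own socket check:

    # Start the target in interrupt mode on 4 cores inside the target netns.
    # (The trace uses the full workspace path to build/bin/nvmf_tgt.)
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!

    # waitforlisten stand-in: poll the RPC socket until the app responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done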
00:32:03.558 [2024-10-17 19:39:26.752169] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:03.558 [2024-10-17 19:39:26.752269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 [2024-10-17 19:39:27.428167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 Malloc0 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
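The rpc_cmd calls above provision the target end to end; stripped of the harness wrappers, the sequence is five RPCs, with every argument copied from the trace:

    rpc=./scripts/rpc.py    # rpc_cmd in the harness wraps this script

    $rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB in-capsule data
    $rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME                          # -a: allow any host, -s: serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case 1 then deliberately adds the same Malloc0 bdev to a second subsystem and expects the "already claimed" failure seen below.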
00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 [2024-10-17 19:39:27.508450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:03.816 test case1: single bdev can't be used in multiple subsystems 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 [2024-10-17 19:39:27.539847] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:03.816 [2024-10-17 19:39:27.539880] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:03.816 [2024-10-17 19:39:27.539888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:03.816 request: 00:32:03.816 { 00:32:03.816 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:03.816 "namespace": { 00:32:03.816 "bdev_name": "Malloc0", 00:32:03.816 "no_auto_visible": false 00:32:03.816 }, 00:32:03.816 "method": "nvmf_subsystem_add_ns", 00:32:03.816 "req_id": 1 00:32:03.816 } 00:32:03.816 Got JSON-RPC error response 00:32:03.816 response: 00:32:03.816 { 00:32:03.816 "code": -32602, 00:32:03.816 "message": "Invalid parameters" 00:32:03.816 } 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:03.816 19:39:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:03.816 Adding namespace failed - expected result. 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:03.816 test case2: host connect to nvmf target in multiple paths 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:03.816 [2024-10-17 19:39:27.551970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:03.816 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:04.074 19:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:04.331 19:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:04.331 19:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:32:04.331 19:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:04.332 19:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:32:04.332 19:39:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:32:06.857 19:39:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:06.857 [global] 00:32:06.857 thread=1 00:32:06.857 invalidate=1 
00:32:06.857 rw=write 00:32:06.857 time_based=1 00:32:06.857 runtime=1 00:32:06.857 ioengine=libaio 00:32:06.857 direct=1 00:32:06.857 bs=4096 00:32:06.857 iodepth=1 00:32:06.857 norandommap=0 00:32:06.857 numjobs=1 00:32:06.857 00:32:06.857 verify_dump=1 00:32:06.857 verify_backlog=512 00:32:06.857 verify_state_save=0 00:32:06.857 do_verify=1 00:32:06.857 verify=crc32c-intel 00:32:06.857 [job0] 00:32:06.857 filename=/dev/nvme0n1 00:32:06.857 Could not set queue depth (nvme0n1) 00:32:06.857 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:06.857 fio-3.35 00:32:06.857 Starting 1 thread 00:32:07.794 00:32:07.794 job0: (groupid=0, jobs=1): err= 0: pid=2321306: Thu Oct 17 19:39:31 2024 00:32:07.794 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:32:07.794 slat (nsec): min=9882, max=24509, avg=22126.26, stdev=2739.01 00:32:07.794 clat (usec): min=40785, max=41109, avg=40956.57, stdev=75.17 00:32:07.794 lat (usec): min=40795, max=41132, avg=40978.70, stdev=76.54 00:32:07.794 clat percentiles (usec): 00:32:07.794 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:32:07.794 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:07.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:07.794 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:07.794 | 99.99th=[41157] 00:32:07.794 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:32:07.794 slat (nsec): min=8921, max=38173, avg=10184.96, stdev=1969.21 00:32:07.794 clat (usec): min=128, max=310, avg=139.57, stdev=11.49 00:32:07.794 lat (usec): min=138, max=348, avg=149.75, stdev=13.09 00:32:07.794 clat percentiles (usec): 00:32:07.794 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:32:07.794 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:32:07.794 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 145], 95.00th=[ 147], 00:32:07.794 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 310], 99.95th=[ 310], 00:32:07.794 | 99.99th=[ 310] 00:32:07.794 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:07.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:07.794 lat (usec) : 250=95.33%, 500=0.37% 00:32:07.794 lat (msec) : 50=4.30% 00:32:07.794 cpu : usr=0.29%, sys=0.39%, ctx=535, majf=0, minf=1 00:32:07.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:07.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:07.795 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:07.795 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:07.795 00:32:07.795 Run status group 0 (all jobs): 00:32:07.795 READ: bw=90.2KiB/s (92.4kB/s), 90.2KiB/s-90.2KiB/s (92.4kB/s-92.4kB/s), io=92.0KiB (94.2kB), run=1020-1020msec 00:32:07.795 WRITE: bw=2008KiB/s (2056kB/s), 2008KiB/s-2008KiB/s (2056kB/s-2056kB/s), io=2048KiB (2097kB), run=1020-1020msec 00:32:07.795 00:32:07.795 Disk stats (read/write): 00:32:07.795 nvme0n1: ios=70/512, merge=0/0, ticks=835/70, in_queue=905, util=90.98% 00:32:07.795 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:08.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:08.053 19:39:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.053 rmmod nvme_tcp 00:32:08.053 rmmod nvme_fabrics 00:32:08.053 rmmod nvme_keyring 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 2320668 ']' 00:32:08.053 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 2320668 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2320668 ']' 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2320668 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2320668 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 2320668' 00:32:08.054 killing process with pid 2320668 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2320668 00:32:08.054 19:39:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2320668 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.313 19:39:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.850 00:32:10.850 real 0m13.756s 00:32:10.850 user 0m24.508s 00:32:10.850 sys 0m6.146s 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:10.850 ************************************ 00:32:10.850 END TEST nvmf_nmic 00:32:10.850 ************************************ 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:10.850 ************************************ 00:32:10.850 START TEST nvmf_fio_target 00:32:10.850 ************************************ 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:10.850 * Looking for test storage... 
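Before nvmf_fio_target begins its own setup, it is worth condensing the teardown the nmic test just performed (nvme disconnect through nvmftestfini in the trace above). The netns deletion is an assumption inferred from the _remove_spdk_ns helper's name; the other commands appear verbatim in the log:

    # Detach the initiator and stop the target app.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"

    # Unload the kernel initiator modules (the rmmod messages above).
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # iptr: keep every firewall rule except the SPDK_NVMF-tagged ACCEPT
    # inserted during setup.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # _remove_spdk_ns (assumed behavior): drop the target namespace, then
    # flush the initiator port's address.
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1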
00:32:10.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:10.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.850 --rc genhtml_branch_coverage=1 00:32:10.850 --rc genhtml_function_coverage=1 00:32:10.850 --rc genhtml_legend=1 00:32:10.850 --rc geninfo_all_blocks=1 00:32:10.850 --rc geninfo_unexecuted_blocks=1 00:32:10.850 00:32:10.850 ' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:10.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.850 --rc genhtml_branch_coverage=1 00:32:10.850 --rc genhtml_function_coverage=1 00:32:10.850 --rc genhtml_legend=1 00:32:10.850 --rc geninfo_all_blocks=1 00:32:10.850 --rc geninfo_unexecuted_blocks=1 00:32:10.850 00:32:10.850 ' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:10.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.850 --rc genhtml_branch_coverage=1 00:32:10.850 --rc genhtml_function_coverage=1 00:32:10.850 --rc genhtml_legend=1 00:32:10.850 --rc geninfo_all_blocks=1 00:32:10.850 --rc geninfo_unexecuted_blocks=1 00:32:10.850 00:32:10.850 ' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:10.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.850 --rc genhtml_branch_coverage=1 00:32:10.850 --rc genhtml_function_coverage=1 00:32:10.850 --rc genhtml_legend=1 00:32:10.850 --rc geninfo_all_blocks=1 00:32:10.850 --rc geninfo_unexecuted_blocks=1 00:32:10.850 
00:32:10.850 ' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.850 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.851 19:39:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.423 19:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.423 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.424 19:39:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:17.424 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:17.424 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:17.424 19:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:17.424 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:17.424 Found net devices under 0000:86:00.1: cvl_0_1 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.450 ms 00:32:17.424 00:32:17.424 --- 10.0.0.2 ping statistics --- 00:32:17.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.424 rtt min/avg/max/mdev = 0.450/0.450/0.450/0.000 ms 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:17.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:17.424 00:32:17.424 --- 10.0.0.1 ping statistics --- 00:32:17.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.424 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.424 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=2325041 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 2325041 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2325041 ']' 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
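The namespace juggling traced above is what gives this run a real two-port NVMe/TCP path: one of the two ice-driven ports (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while its link partner (cvl_0_1) stays in the root namespace as 10.0.0.1 for the initiator, so test traffic crosses the physical link instead of loopback. Condensed from the commands echoed above (the interface names and 10.0.0.x/24 addresses are specific to this machine), the setup is roughly:

  # Target port lives in its own netns; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) traffic on the initiator-side interface.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions before starting the target, as the pings above do.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1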
00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.425 [2024-10-17 19:39:40.347368] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.425 [2024-10-17 19:39:40.348306] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:32:17.425 [2024-10-17 19:39:40.348340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.425 [2024-10-17 19:39:40.426883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:17.425 [2024-10-17 19:39:40.469190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.425 [2024-10-17 19:39:40.469228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.425 [2024-10-17 19:39:40.469235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.425 [2024-10-17 19:39:40.469241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.425 [2024-10-17 19:39:40.469246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.425 [2024-10-17 19:39:40.470743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.425 [2024-10-17 19:39:40.470776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:17.425 [2024-10-17 19:39:40.470883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.425 [2024-10-17 19:39:40.470884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:17.425 [2024-10-17 19:39:40.538518] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:17.425 [2024-10-17 19:39:40.538939] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:17.425 [2024-10-17 19:39:40.539282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:17.425 [2024-10-17 19:39:40.539522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:17.425 [2024-10-17 19:39:40.539598] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
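The reactor and spdk_thread notices above confirm that nvmf_tgt came up inside the namespace with all four cores (-m 0xF) running in interrupt mode. The trace that follows provisions the target over JSON-RPC and connects the kernel initiator; as a forward reference, that sequence boils down to roughly the following. This is a sketch, not the literal test script: $SPDK stands for the SPDK checkout and $rpc for $SPDK/scripts/rpc.py (shorthands introduced here; the run above uses absolute Jenkins paths), the rpc_get_methods readiness poll is an assumed stand-in for waitforlisten, and the real nvme connect above additionally passes --hostnqn/--hostid.

  # Launch the target inside the namespace, flags as echoed above.
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &

  # Wait for the RPC socket to answer, then provision transport, bdevs, subsystem.
  until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  $rpc nvmf_create_transport -t tcp -o -u 8192
  for _ in $(seq 7); do $rpc bdev_malloc_create 64 512; done   # auto-named Malloc0..Malloc6
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

  # Initiator side: connect, then wait until all four namespaces surface as block devices.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -eq 4 ]; do sleep 2; done

With nvme0n1 through nvme0n4 visible, the four fio passes above sweep write/randwrite at iodepth 1 and 128 against the two plain malloc namespaces, the RAID0 namespace, and the concat namespace.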
00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:17.425 [2024-10-17 19:39:40.775716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.425 19:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:17.425 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:17.425 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:17.684 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:17.684 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:17.684 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:17.684 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:17.944 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:17.944 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:18.203 19:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:18.462 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:18.462 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:18.462 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:18.462 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:18.721 19:39:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:32:18.721 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:18.980 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:19.239 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:19.239 19:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:19.239 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:19.239 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:19.496 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:19.755 [2024-10-17 19:39:43.355597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:19.755 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:20.013 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:20.013 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:20.272 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:20.272 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:32:20.272 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:32:20.272 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:32:20.272 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:32:20.272 19:39:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:32:22.300 19:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:32:22.300 19:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:32:22.300 19:39:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:32:22.300 19:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:32:22.300 19:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:32:22.300 19:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:32:22.300 19:39:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:22.300 [global] 00:32:22.300 thread=1 00:32:22.300 invalidate=1 00:32:22.300 rw=write 00:32:22.300 time_based=1 00:32:22.300 runtime=1 00:32:22.300 ioengine=libaio 00:32:22.300 direct=1 00:32:22.300 bs=4096 00:32:22.300 iodepth=1 00:32:22.300 norandommap=0 00:32:22.300 numjobs=1 00:32:22.300 00:32:22.300 verify_dump=1 00:32:22.300 verify_backlog=512 00:32:22.300 verify_state_save=0 00:32:22.300 do_verify=1 00:32:22.300 verify=crc32c-intel 00:32:22.300 [job0] 00:32:22.300 filename=/dev/nvme0n1 00:32:22.300 [job1] 00:32:22.300 filename=/dev/nvme0n2 00:32:22.300 [job2] 00:32:22.300 filename=/dev/nvme0n3 00:32:22.300 [job3] 00:32:22.300 filename=/dev/nvme0n4 00:32:22.572 Could not set queue depth (nvme0n1) 00:32:22.572 Could not set queue depth (nvme0n2) 00:32:22.572 Could not set queue depth (nvme0n3) 00:32:22.572 Could not set queue depth (nvme0n4) 00:32:22.830 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:22.830 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:22.830 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:22.830 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:22.830 fio-3.35 00:32:22.830 Starting 4 threads 00:32:24.205 00:32:24.205 job0: (groupid=0, jobs=1): err= 0: pid=2326161: Thu Oct 17 19:39:47 2024 00:32:24.205 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:32:24.205 slat (nsec): min=10084, max=23499, avg=22433.59, stdev=2784.71 00:32:24.205 clat (usec): min=40881, max=41350, avg=40989.29, stdev=91.96 00:32:24.205 lat (usec): min=40904, max=41360, avg=41011.72, stdev=89.43 00:32:24.205 clat percentiles (usec): 00:32:24.205 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:24.205 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:24.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:24.205 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:24.205 | 99.99th=[41157] 00:32:24.205 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:32:24.205 slat (nsec): min=9870, max=45574, avg=13555.48, stdev=3394.92 00:32:24.205 clat (usec): min=118, max=396, avg=189.90, stdev=30.90 00:32:24.205 lat (usec): min=130, max=442, avg=203.46, stdev=30.92 00:32:24.205 clat percentiles (usec): 00:32:24.205 | 1.00th=[ 143], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 172], 00:32:24.205 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:32:24.205 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 239], 95.00th=[ 255], 00:32:24.205 | 
99.00th=[ 265], 99.50th=[ 297], 99.90th=[ 396], 99.95th=[ 396], 00:32:24.205 | 99.99th=[ 396] 00:32:24.205 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:32:24.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:24.205 lat (usec) : 250=90.45%, 500=5.43% 00:32:24.205 lat (msec) : 50=4.12% 00:32:24.205 cpu : usr=0.60%, sys=0.69%, ctx=536, majf=0, minf=1 00:32:24.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.205 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.205 job1: (groupid=0, jobs=1): err= 0: pid=2326162: Thu Oct 17 19:39:47 2024 00:32:24.205 read: IOPS=26, BW=105KiB/s (107kB/s)(108KiB/1033msec) 00:32:24.205 slat (nsec): min=8090, max=22465, avg=19171.11, stdev=5168.16 00:32:24.205 clat (usec): min=229, max=41250, avg=34919.93, stdev=14739.84 00:32:24.205 lat (usec): min=246, max=41258, avg=34939.10, stdev=14740.12 00:32:24.205 clat percentiles (usec): 00:32:24.205 | 1.00th=[ 229], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[40633], 00:32:24.205 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:24.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:24.205 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:24.205 | 99.99th=[41157] 00:32:24.205 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:32:24.205 slat (nsec): min=9309, max=41135, avg=10465.49, stdev=2025.82 00:32:24.205 clat (usec): min=128, max=298, avg=162.38, stdev=13.87 00:32:24.205 lat (usec): min=139, max=333, avg=172.84, stdev=14.91 00:32:24.205 clat percentiles (usec): 00:32:24.205 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:32:24.205 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:32:24.205 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 180], 00:32:24.205 | 99.00th=[ 221], 99.50th=[ 265], 99.90th=[ 297], 99.95th=[ 297], 00:32:24.205 | 99.99th=[ 297] 00:32:24.205 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:32:24.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:24.205 lat (usec) : 250=94.99%, 500=0.74% 00:32:24.205 lat (msec) : 50=4.27% 00:32:24.205 cpu : usr=0.29%, sys=0.48%, ctx=539, majf=0, minf=1 00:32:24.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.205 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.205 job2: (groupid=0, jobs=1): err= 0: pid=2326165: Thu Oct 17 19:39:47 2024 00:32:24.205 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:32:24.205 slat (nsec): min=12058, max=23293, avg=22123.77, stdev=2281.41 00:32:24.205 clat (usec): min=40583, max=41046, avg=40951.96, stdev=90.50 00:32:24.205 lat (usec): min=40595, max=41067, avg=40974.08, stdev=92.50 00:32:24.205 clat percentiles (usec): 00:32:24.205 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:24.205 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:32:24.205 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:24.205 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:24.205 | 99.99th=[41157] 00:32:24.205 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:32:24.205 slat (nsec): min=12811, max=44735, avg=15001.40, stdev=2278.82 00:32:24.205 clat (usec): min=143, max=359, avg=185.81, stdev=22.14 00:32:24.205 lat (usec): min=159, max=374, avg=200.81, stdev=22.26 00:32:24.205 clat percentiles (usec): 00:32:24.205 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 172], 00:32:24.205 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:32:24.205 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 237], 00:32:24.205 | 99.00th=[ 262], 99.50th=[ 289], 99.90th=[ 359], 99.95th=[ 359], 00:32:24.205 | 99.99th=[ 359] 00:32:24.205 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:32:24.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:24.205 lat (usec) : 250=94.38%, 500=1.50% 00:32:24.205 lat (msec) : 50=4.12% 00:32:24.205 cpu : usr=0.80%, sys=0.80%, ctx=536, majf=0, minf=1 00:32:24.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.206 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.206 job3: (groupid=0, jobs=1): err= 0: pid=2326167: Thu Oct 17 19:39:47 2024 00:32:24.206 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:32:24.206 slat (nsec): min=10554, max=27066, avg=23394.14, stdev=3307.73 00:32:24.206 clat (usec): min=40894, max=41280, avg=40978.16, stdev=76.78 00:32:24.206 lat (usec): min=40921, max=41290, avg=41001.55, stdev=74.10 00:32:24.206 clat percentiles (usec): 00:32:24.206 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:24.206 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:24.206 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:24.206 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:24.206 | 99.99th=[41157] 00:32:24.206 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:32:24.206 slat (nsec): min=10885, max=40593, avg=12708.10, stdev=2310.36 00:32:24.206 clat (usec): min=133, max=455, avg=195.18, stdev=31.48 00:32:24.206 lat (usec): min=145, max=496, avg=207.88, stdev=31.88 00:32:24.206 clat percentiles (usec): 00:32:24.206 | 1.00th=[ 141], 5.00th=[ 163], 10.00th=[ 172], 20.00th=[ 176], 00:32:24.206 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:32:24.206 | 70.00th=[ 196], 80.00th=[ 217], 90.00th=[ 241], 95.00th=[ 255], 00:32:24.206 | 99.00th=[ 273], 99.50th=[ 326], 99.90th=[ 457], 99.95th=[ 457], 00:32:24.206 | 99.99th=[ 457] 00:32:24.206 bw ( KiB/s): min= 4096, max= 4096, per=51.65%, avg=4096.00, stdev= 0.00, samples=1 00:32:24.206 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:24.206 lat (usec) : 250=89.89%, 500=5.99% 00:32:24.206 lat (msec) : 50=4.12% 00:32:24.206 cpu : usr=0.30%, sys=1.09%, ctx=535, majf=0, minf=1 00:32:24.206 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:24.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.206 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.206 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:24.206 00:32:24.206 Run status group 0 (all jobs): 00:32:24.206 READ: bw=360KiB/s (369kB/s), 87.0KiB/s-105KiB/s (89.1kB/s-107kB/s), io=372KiB (381kB), run=1007-1033msec 00:32:24.206 WRITE: bw=7930KiB/s (8121kB/s), 1983KiB/s-2034KiB/s (2030kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1033msec 00:32:24.206 00:32:24.206 Disk stats (read/write): 00:32:24.206 nvme0n1: ios=70/512, merge=0/0, ticks=1422/92, in_queue=1514, util=98.10% 00:32:24.206 nvme0n2: ios=44/512, merge=0/0, ticks=993/83, in_queue=1076, util=95.53% 00:32:24.206 nvme0n3: ios=42/512, merge=0/0, ticks=1699/95, in_queue=1794, util=100.00% 00:32:24.206 nvme0n4: ios=42/512, merge=0/0, ticks=1722/92, in_queue=1814, util=98.32% 00:32:24.206 19:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:24.206 [global] 00:32:24.206 thread=1 00:32:24.206 invalidate=1 00:32:24.206 rw=randwrite 00:32:24.206 time_based=1 00:32:24.206 runtime=1 00:32:24.206 ioengine=libaio 00:32:24.206 direct=1 00:32:24.206 bs=4096 00:32:24.206 iodepth=1 00:32:24.206 norandommap=0 00:32:24.206 numjobs=1 00:32:24.206 00:32:24.206 verify_dump=1 00:32:24.206 verify_backlog=512 00:32:24.206 verify_state_save=0 00:32:24.206 do_verify=1 00:32:24.206 verify=crc32c-intel 00:32:24.206 [job0] 00:32:24.206 filename=/dev/nvme0n1 00:32:24.206 [job1] 00:32:24.206 filename=/dev/nvme0n2 00:32:24.206 [job2] 00:32:24.206 filename=/dev/nvme0n3 00:32:24.206 [job3] 00:32:24.206 filename=/dev/nvme0n4 00:32:24.206 Could not set queue depth (nvme0n1) 00:32:24.206 Could not set queue depth (nvme0n2) 00:32:24.206 Could not set queue depth (nvme0n3) 00:32:24.206 Could not set queue depth (nvme0n4) 00:32:24.206 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.206 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.206 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.206 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:24.206 fio-3.35 00:32:24.206 Starting 4 threads 00:32:25.580 00:32:25.580 job0: (groupid=0, jobs=1): err= 0: pid=2326552: Thu Oct 17 19:39:49 2024 00:32:25.580 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:32:25.580 slat (nsec): min=9896, max=24163, avg=22120.91, stdev=2898.14 00:32:25.580 clat (usec): min=40846, max=41079, avg=40966.20, stdev=54.30 00:32:25.580 lat (usec): min=40856, max=41102, avg=40988.32, stdev=55.86 00:32:25.580 clat percentiles (usec): 00:32:25.580 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:25.580 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:25.580 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:25.580 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:25.580 | 99.99th=[41157] 00:32:25.580 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:32:25.580 slat (nsec): min=9672, max=40373, avg=11883.03, stdev=2649.19 00:32:25.580 
clat (usec): min=148, max=382, avg=207.25, stdev=40.30 00:32:25.580 lat (usec): min=159, max=392, avg=219.13, stdev=40.62 00:32:25.580 clat percentiles (usec): 00:32:25.580 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:32:25.580 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 204], 00:32:25.580 | 70.00th=[ 219], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 281], 00:32:25.580 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 383], 99.95th=[ 383], 00:32:25.580 | 99.99th=[ 383] 00:32:25.580 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:32:25.580 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:25.580 lat (usec) : 250=76.03%, 500=19.85% 00:32:25.580 lat (msec) : 50=4.12% 00:32:25.580 cpu : usr=0.49%, sys=0.79%, ctx=534, majf=0, minf=1 00:32:25.580 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.580 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.580 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.580 job1: (groupid=0, jobs=1): err= 0: pid=2326558: Thu Oct 17 19:39:49 2024 00:32:25.581 read: IOPS=2505, BW=9.79MiB/s (10.3MB/s)(9.80MiB/1001msec) 00:32:25.581 slat (nsec): min=6272, max=29173, avg=7245.45, stdev=1046.70 00:32:25.581 clat (usec): min=167, max=501, avg=225.95, stdev=34.20 00:32:25.581 lat (usec): min=178, max=509, avg=233.20, stdev=34.20 00:32:25.581 clat percentiles (usec): 00:32:25.581 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 184], 00:32:25.581 | 30.00th=[ 188], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 247], 00:32:25.581 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 269], 00:32:25.581 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 322], 99.95th=[ 322], 00:32:25.581 | 99.99th=[ 502] 00:32:25.581 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:25.581 slat (nsec): min=8931, max=60892, avg=9905.88, stdev=1367.60 00:32:25.581 clat (usec): min=113, max=387, avg=148.20, stdev=31.55 00:32:25.581 lat (usec): min=122, max=397, avg=158.11, stdev=31.71 00:32:25.581 clat percentiles (usec): 00:32:25.581 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 128], 00:32:25.581 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:32:25.581 | 70.00th=[ 149], 80.00th=[ 172], 90.00th=[ 186], 95.00th=[ 239], 00:32:25.581 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 265], 99.95th=[ 281], 00:32:25.581 | 99.99th=[ 388] 00:32:25.581 bw ( KiB/s): min=12288, max=12288, per=77.70%, avg=12288.00, stdev= 0.00, samples=1 00:32:25.581 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:32:25.581 lat (usec) : 250=86.25%, 500=13.73%, 750=0.02% 00:32:25.581 cpu : usr=2.40%, sys=4.60%, ctx=5069, majf=0, minf=1 00:32:25.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.581 issued rwts: total=2508,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.581 job2: (groupid=0, jobs=1): err= 0: pid=2326574: Thu Oct 17 19:39:49 2024 00:32:25.581 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 00:32:25.581 slat (nsec): 
min=9233, max=20561, avg=11342.27, stdev=3030.59 00:32:25.581 clat (usec): min=40529, max=41092, avg=40969.90, stdev=104.78 00:32:25.581 lat (usec): min=40539, max=41103, avg=40981.24, stdev=104.84 00:32:25.581 clat percentiles (usec): 00:32:25.581 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:25.581 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:25.581 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:25.581 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:25.581 | 99.99th=[41157] 00:32:25.581 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:32:25.581 slat (nsec): min=9551, max=39544, avg=11377.84, stdev=1647.75 00:32:25.581 clat (usec): min=143, max=427, avg=200.69, stdev=34.10 00:32:25.581 lat (usec): min=154, max=438, avg=212.07, stdev=34.34 00:32:25.581 clat percentiles (usec): 00:32:25.581 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 167], 00:32:25.581 | 30.00th=[ 176], 40.00th=[ 186], 50.00th=[ 202], 60.00th=[ 215], 00:32:25.581 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 247], 00:32:25.581 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 429], 99.95th=[ 429], 00:32:25.581 | 99.99th=[ 429] 00:32:25.581 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 00:32:25.581 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:25.581 lat (usec) : 250=92.70%, 500=3.18% 00:32:25.581 lat (msec) : 50=4.12% 00:32:25.581 cpu : usr=0.30%, sys=0.59%, ctx=537, majf=0, minf=1 00:32:25.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.581 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.581 job3: (groupid=0, jobs=1): err= 0: pid=2326579: Thu Oct 17 19:39:49 2024 00:32:25.581 read: IOPS=21, BW=84.9KiB/s (87.0kB/s)(88.0KiB/1036msec) 00:32:25.581 slat (nsec): min=9871, max=26304, avg=23197.14, stdev=3167.54 00:32:25.581 clat (usec): min=40890, max=41044, avg=40968.24, stdev=47.23 00:32:25.581 lat (usec): min=40914, max=41069, avg=40991.43, stdev=47.88 00:32:25.581 clat percentiles (usec): 00:32:25.581 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:32:25.581 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:25.581 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:25.581 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:25.581 | 99.99th=[41157] 00:32:25.581 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:32:25.581 slat (nsec): min=10906, max=43364, avg=13311.14, stdev=2477.26 00:32:25.581 clat (usec): min=160, max=341, avg=241.79, stdev=12.56 00:32:25.581 lat (usec): min=171, max=356, avg=255.10, stdev=12.88 00:32:25.581 clat percentiles (usec): 00:32:25.581 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 237], 00:32:25.581 | 30.00th=[ 239], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:32:25.581 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 251], 95.00th=[ 260], 00:32:25.581 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 343], 99.95th=[ 343], 00:32:25.581 | 99.99th=[ 343] 00:32:25.581 bw ( KiB/s): min= 4096, max= 4096, per=25.90%, avg=4096.00, stdev= 0.00, samples=1 
00:32:25.581 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:25.581 lat (usec) : 250=85.39%, 500=10.49% 00:32:25.581 lat (msec) : 50=4.12% 00:32:25.581 cpu : usr=0.77%, sys=0.68%, ctx=535, majf=0, minf=1 00:32:25.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.581 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:25.581 00:32:25.581 Run status group 0 (all jobs): 00:32:25.581 READ: bw=9938KiB/s (10.2MB/s), 84.9KiB/s-9.79MiB/s (87.0kB/s-10.3MB/s), io=10.1MiB (10.5MB), run=1001-1036msec 00:32:25.581 WRITE: bw=15.4MiB/s (16.2MB/s), 1977KiB/s-9.99MiB/s (2024kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1036msec 00:32:25.581 00:32:25.581 Disk stats (read/write): 00:32:25.581 nvme0n1: ios=67/512, merge=0/0, ticks=711/103, in_queue=814, util=86.37% 00:32:25.581 nvme0n2: ios=2048/2180, merge=0/0, ticks=474/313, in_queue=787, util=86.48% 00:32:25.581 nvme0n3: ios=43/512, merge=0/0, ticks=1722/101, in_queue=1823, util=97.81% 00:32:25.581 nvme0n4: ios=41/512, merge=0/0, ticks=1640/115, in_queue=1755, util=97.89% 00:32:25.581 19:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:25.581 [global] 00:32:25.581 thread=1 00:32:25.581 invalidate=1 00:32:25.581 rw=write 00:32:25.581 time_based=1 00:32:25.581 runtime=1 00:32:25.581 ioengine=libaio 00:32:25.581 direct=1 00:32:25.581 bs=4096 00:32:25.581 iodepth=128 00:32:25.581 norandommap=0 00:32:25.581 numjobs=1 00:32:25.581 00:32:25.581 verify_dump=1 00:32:25.581 verify_backlog=512 00:32:25.581 verify_state_save=0 00:32:25.581 do_verify=1 00:32:25.581 verify=crc32c-intel 00:32:25.581 [job0] 00:32:25.581 filename=/dev/nvme0n1 00:32:25.581 [job1] 00:32:25.581 filename=/dev/nvme0n2 00:32:25.581 [job2] 00:32:25.581 filename=/dev/nvme0n3 00:32:25.581 [job3] 00:32:25.581 filename=/dev/nvme0n4 00:32:25.581 Could not set queue depth (nvme0n1) 00:32:25.581 Could not set queue depth (nvme0n2) 00:32:25.581 Could not set queue depth (nvme0n3) 00:32:25.581 Could not set queue depth (nvme0n4) 00:32:25.840 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:25.840 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:25.840 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:25.840 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:25.840 fio-3.35 00:32:25.840 Starting 4 threads 00:32:27.217 00:32:27.217 job0: (groupid=0, jobs=1): err= 0: pid=2326977: Thu Oct 17 19:39:50 2024 00:32:27.217 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:32:27.217 slat (nsec): min=1336, max=10217k, avg=102581.49, stdev=719536.79 00:32:27.217 clat (usec): min=2977, max=28780, avg=12826.45, stdev=4253.41 00:32:27.217 lat (usec): min=2988, max=28784, avg=12929.03, stdev=4295.16 00:32:27.217 clat percentiles (usec): 00:32:27.217 | 1.00th=[ 3687], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[ 9241], 00:32:27.217 | 30.00th=[ 9765], 40.00th=[11207], 50.00th=[12256], 60.00th=[13042], 
00:32:27.217 | 70.00th=[14746], 80.00th=[15664], 90.00th=[18220], 95.00th=[21627], 00:32:27.217 | 99.00th=[26870], 99.50th=[27395], 99.90th=[28181], 99.95th=[28705], 00:32:27.217 | 99.99th=[28705] 00:32:27.217 write: IOPS=4133, BW=16.1MiB/s (16.9MB/s)(16.3MiB/1008msec); 0 zone resets 00:32:27.217 slat (usec): min=2, max=13129, avg=131.30, stdev=792.36 00:32:27.217 clat (usec): min=1259, max=108092, avg=18057.35, stdev=14727.20 00:32:27.217 lat (usec): min=1274, max=108102, avg=18188.65, stdev=14805.29 00:32:27.217 clat percentiles (msec): 00:32:27.217 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:32:27.217 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 16], 00:32:27.217 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 29], 95.00th=[ 37], 00:32:27.217 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 109], 99.95th=[ 109], 00:32:27.217 | 99.99th=[ 109] 00:32:27.217 bw ( KiB/s): min=16384, max=16384, per=22.99%, avg=16384.00, stdev= 0.00, samples=2 00:32:27.217 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:32:27.217 lat (msec) : 2=0.23%, 4=0.59%, 10=27.92%, 20=51.11%, 50=18.24% 00:32:27.217 lat (msec) : 100=1.36%, 250=0.56% 00:32:27.217 cpu : usr=3.28%, sys=5.76%, ctx=343, majf=0, minf=1 00:32:27.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:32:27.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.217 issued rwts: total=4096,4167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.217 job1: (groupid=0, jobs=1): err= 0: pid=2326991: Thu Oct 17 19:39:50 2024 00:32:27.217 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:32:27.217 slat (nsec): min=1070, max=28123k, avg=72136.32, stdev=602138.39 00:32:27.217 clat (usec): min=3585, max=56833, avg=10671.25, stdev=5258.45 00:32:27.217 lat (usec): min=3591, max=58707, avg=10743.38, stdev=5278.62 00:32:27.217 clat percentiles (usec): 00:32:27.217 | 1.00th=[ 4047], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8029], 00:32:27.217 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159], 00:32:27.217 | 70.00th=[10683], 80.00th=[11994], 90.00th=[14353], 95.00th=[18220], 00:32:27.217 | 99.00th=[36963], 99.50th=[38011], 99.90th=[56886], 99.95th=[56886], 00:32:27.217 | 99.99th=[56886] 00:32:27.217 write: IOPS=6314, BW=24.7MiB/s (25.9MB/s)(24.9MiB/1008msec); 0 zone resets 00:32:27.217 slat (nsec): min=1979, max=20134k, avg=70651.03, stdev=570467.07 00:32:27.217 clat (usec): min=476, max=30198, avg=9767.07, stdev=3300.33 00:32:27.217 lat (usec): min=503, max=30214, avg=9837.72, stdev=3333.51 00:32:27.217 clat percentiles (usec): 00:32:27.217 | 1.00th=[ 5014], 5.00th=[ 5735], 10.00th=[ 7308], 20.00th=[ 8029], 00:32:27.217 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9503], 00:32:27.217 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[12780], 95.00th=[16712], 00:32:27.217 | 99.00th=[22676], 99.50th=[27132], 99.90th=[27132], 99.95th=[27132], 00:32:27.217 | 99.99th=[30278] 00:32:27.217 bw ( KiB/s): min=23072, max=26832, per=35.01%, avg=24952.00, stdev=2658.72, samples=2 00:32:27.217 iops : min= 5768, max= 6708, avg=6238.00, stdev=664.68, samples=2 00:32:27.217 lat (usec) : 500=0.01%, 750=0.01% 00:32:27.217 lat (msec) : 4=0.57%, 10=64.82%, 20=30.98%, 50=3.48%, 100=0.14% 00:32:27.217 cpu : usr=3.57%, sys=6.85%, ctx=438, majf=0, minf=1 00:32:27.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:27.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.217 issued rwts: total=6144,6365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.217 job2: (groupid=0, jobs=1): err= 0: pid=2327009: Thu Oct 17 19:39:50 2024 00:32:27.217 read: IOPS=2526, BW=9.87MiB/s (10.3MB/s)(9.95MiB/1008msec) 00:32:27.217 slat (nsec): min=1711, max=19225k, avg=166289.99, stdev=1087734.24 00:32:27.217 clat (usec): min=4329, max=89268, avg=21800.02, stdev=17067.75 00:32:27.217 lat (usec): min=4332, max=89274, avg=21966.31, stdev=17157.73 00:32:27.217 clat percentiles (usec): 00:32:27.217 | 1.00th=[ 8848], 5.00th=[10159], 10.00th=[10290], 20.00th=[10552], 00:32:27.217 | 30.00th=[11207], 40.00th=[12125], 50.00th=[13829], 60.00th=[18744], 00:32:27.217 | 70.00th=[20841], 80.00th=[31327], 90.00th=[47973], 95.00th=[59507], 00:32:27.217 | 99.00th=[78119], 99.50th=[82314], 99.90th=[89654], 99.95th=[89654], 00:32:27.217 | 99.99th=[89654] 00:32:27.217 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:32:27.217 slat (usec): min=2, max=19327, avg=219.62, stdev=1115.64 00:32:27.217 clat (msec): min=3, max=122, avg=28.12, stdev=28.36 00:32:27.218 lat (msec): min=3, max=122, avg=28.33, stdev=28.55 00:32:27.218 clat percentiles (msec): 00:32:27.218 | 1.00th=[ 5], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:32:27.218 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 22], 00:32:27.218 | 70.00th=[ 25], 80.00th=[ 42], 90.00th=[ 83], 95.00th=[ 102], 00:32:27.218 | 99.00th=[ 115], 99.50th=[ 122], 99.90th=[ 123], 99.95th=[ 123], 00:32:27.218 | 99.99th=[ 123] 00:32:27.218 bw ( KiB/s): min= 6784, max=13696, per=14.37%, avg=10240.00, stdev=4887.52, samples=2 00:32:27.218 iops : min= 1696, max= 3424, avg=2560.00, stdev=1221.88, samples=2 00:32:27.218 lat (msec) : 4=0.14%, 10=12.24%, 20=47.91%, 50=26.53%, 100=10.53% 00:32:27.218 lat (msec) : 250=2.64% 00:32:27.218 cpu : usr=1.59%, sys=3.57%, ctx=295, majf=0, minf=1 00:32:27.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:27.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.218 issued rwts: total=2547,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.218 job3: (groupid=0, jobs=1): err= 0: pid=2327015: Thu Oct 17 19:39:50 2024 00:32:27.218 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:32:27.218 slat (nsec): min=1438, max=12104k, avg=96944.41, stdev=605028.02 00:32:27.218 clat (usec): min=1282, max=65715, avg=13326.65, stdev=8558.11 00:32:27.218 lat (usec): min=1287, max=66743, avg=13423.59, stdev=8601.63 00:32:27.218 clat percentiles (usec): 00:32:27.218 | 1.00th=[ 2024], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[ 9765], 00:32:27.218 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:32:27.218 | 70.00th=[12780], 80.00th=[13566], 90.00th=[16712], 95.00th=[34341], 00:32:27.218 | 99.00th=[56361], 99.50th=[59507], 99.90th=[65799], 99.95th=[65799], 00:32:27.218 | 99.99th=[65799] 00:32:27.218 write: IOPS=4858, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1002msec); 0 zone resets 00:32:27.218 slat (usec): min=2, max=11382, avg=104.62, stdev=591.54 00:32:27.218 clat (usec): min=345, max=58493, 
avg=13474.37, stdev=6893.18 00:32:27.218 lat (usec): min=3332, max=58497, avg=13578.99, stdev=6941.51 00:32:27.218 clat percentiles (usec): 00:32:27.218 | 1.00th=[ 5145], 5.00th=[ 7832], 10.00th=[ 9110], 20.00th=[ 9765], 00:32:27.218 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:32:27.218 | 70.00th=[13042], 80.00th=[14222], 90.00th=[22676], 95.00th=[24249], 00:32:27.218 | 99.00th=[50070], 99.50th=[56361], 99.90th=[57934], 99.95th=[57934], 00:32:27.218 | 99.99th=[58459] 00:32:27.218 bw ( KiB/s): min=16384, max=21536, per=26.60%, avg=18960.00, stdev=3643.01, samples=2 00:32:27.218 iops : min= 4096, max= 5384, avg=4740.00, stdev=910.75, samples=2 00:32:27.218 lat (usec) : 500=0.01% 00:32:27.218 lat (msec) : 2=0.43%, 4=1.53%, 10=20.54%, 20=67.70%, 50=8.29% 00:32:27.218 lat (msec) : 100=1.50% 00:32:27.218 cpu : usr=3.20%, sys=5.79%, ctx=565, majf=0, minf=1 00:32:27.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:27.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:27.218 issued rwts: total=4608,4868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:27.218 00:32:27.218 Run status group 0 (all jobs): 00:32:27.218 READ: bw=67.4MiB/s (70.7MB/s), 9.87MiB/s-23.8MiB/s (10.3MB/s-25.0MB/s), io=67.9MiB (71.2MB), run=1002-1008msec 00:32:27.218 WRITE: bw=69.6MiB/s (73.0MB/s), 9.92MiB/s-24.7MiB/s (10.4MB/s-25.9MB/s), io=70.2MiB (73.6MB), run=1002-1008msec 00:32:27.218 00:32:27.218 Disk stats (read/write): 00:32:27.218 nvme0n1: ios=3094/3582, merge=0/0, ticks=34602/64276, in_queue=98878, util=97.49% 00:32:27.218 nvme0n2: ios=5477/5632, merge=0/0, ticks=39576/37506, in_queue=77082, util=96.03% 00:32:27.218 nvme0n3: ios=2298/2560, merge=0/0, ticks=15468/22766, in_queue=38234, util=94.78% 00:32:27.218 nvme0n4: ios=3642/4007, merge=0/0, ticks=28869/36373, in_queue=65242, util=97.78% 00:32:27.218 19:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:27.218 [global] 00:32:27.218 thread=1 00:32:27.218 invalidate=1 00:32:27.218 rw=randwrite 00:32:27.218 time_based=1 00:32:27.218 runtime=1 00:32:27.218 ioengine=libaio 00:32:27.218 direct=1 00:32:27.218 bs=4096 00:32:27.218 iodepth=128 00:32:27.218 norandommap=0 00:32:27.218 numjobs=1 00:32:27.218 00:32:27.218 verify_dump=1 00:32:27.218 verify_backlog=512 00:32:27.218 verify_state_save=0 00:32:27.218 do_verify=1 00:32:27.218 verify=crc32c-intel 00:32:27.218 [job0] 00:32:27.218 filename=/dev/nvme0n1 00:32:27.218 [job1] 00:32:27.218 filename=/dev/nvme0n2 00:32:27.218 [job2] 00:32:27.218 filename=/dev/nvme0n3 00:32:27.218 [job3] 00:32:27.218 filename=/dev/nvme0n4 00:32:27.218 Could not set queue depth (nvme0n1) 00:32:27.218 Could not set queue depth (nvme0n2) 00:32:27.218 Could not set queue depth (nvme0n3) 00:32:27.218 Could not set queue depth (nvme0n4) 00:32:27.477 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:27.477 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:27.477 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:27.477 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:27.477 fio-3.35 00:32:27.477 Starting 4 threads 00:32:28.878 00:32:28.878 job0: (groupid=0, jobs=1): err= 0: pid=2327402: Thu Oct 17 19:39:52 2024 00:32:28.878 read: IOPS=4064, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:32:28.878 slat (nsec): min=1676, max=12264k, avg=99136.47, stdev=741716.95 00:32:28.878 clat (usec): min=6555, max=43369, avg=13533.88, stdev=4782.81 00:32:28.878 lat (usec): min=6562, max=43890, avg=13633.02, stdev=4828.33 00:32:28.878 clat percentiles (usec): 00:32:28.878 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 8586], 20.00th=[ 9634], 00:32:28.878 | 30.00th=[10552], 40.00th=[11600], 50.00th=[12780], 60.00th=[14353], 00:32:28.878 | 70.00th=[15664], 80.00th=[16319], 90.00th=[19006], 95.00th=[21890], 00:32:28.878 | 99.00th=[32113], 99.50th=[39584], 99.90th=[43254], 99.95th=[43254], 00:32:28.878 | 99.99th=[43254] 00:32:28.878 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:32:28.878 slat (usec): min=2, max=11978, avg=122.29, stdev=797.88 00:32:28.878 clat (usec): min=1580, max=52584, avg=15679.89, stdev=10154.92 00:32:28.878 lat (usec): min=1596, max=52595, avg=15802.18, stdev=10227.47 00:32:28.878 clat percentiles (usec): 00:32:28.878 | 1.00th=[ 4228], 5.00th=[ 6849], 10.00th=[ 7832], 20.00th=[ 9372], 00:32:28.878 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11863], 60.00th=[12780], 00:32:28.878 | 70.00th=[14877], 80.00th=[20841], 90.00th=[32113], 95.00th=[40633], 00:32:28.878 | 99.00th=[47973], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:32:28.878 | 99.99th=[52691] 00:32:28.878 bw ( KiB/s): min=16416, max=19472, per=24.86%, avg=17944.00, stdev=2160.92, samples=2 00:32:28.878 iops : min= 4104, max= 4868, avg=4486.00, stdev=540.23, samples=2 00:32:28.878 lat (msec) : 2=0.20%, 4=0.28%, 10=23.22%, 20=62.00%, 50=13.97% 00:32:28.878 lat (msec) : 100=0.33% 00:32:28.878 cpu : usr=3.67%, sys=6.15%, ctx=269, majf=0, minf=2 00:32:28.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:28.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:28.878 issued rwts: total=4101,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:28.878 job1: (groupid=0, jobs=1): err= 0: pid=2327413: Thu Oct 17 19:39:52 2024 00:32:28.878 read: IOPS=5956, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1005msec) 00:32:28.878 slat (nsec): min=1090, max=16877k, avg=79176.13, stdev=490815.21 00:32:28.878 clat (usec): min=1588, max=38967, avg=10466.82, stdev=4572.03 00:32:28.878 lat (usec): min=1595, max=38973, avg=10545.99, stdev=4589.23 00:32:28.878 clat percentiles (usec): 00:32:28.878 | 1.00th=[ 2638], 5.00th=[ 3851], 10.00th=[ 6456], 20.00th=[ 8225], 00:32:28.878 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10290], 00:32:28.878 | 70.00th=[10683], 80.00th=[11600], 90.00th=[15533], 95.00th=[20841], 00:32:28.878 | 99.00th=[27657], 99.50th=[29754], 99.90th=[38536], 99.95th=[39060], 00:32:28.878 | 99.99th=[39060] 00:32:28.878 write: IOPS=6225, BW=24.3MiB/s (25.5MB/s)(24.4MiB/1005msec); 0 zone resets 00:32:28.878 slat (nsec): min=1810, max=11779k, avg=74496.92, stdev=459429.71 00:32:28.878 clat (usec): min=486, max=46199, avg=10352.72, stdev=4239.19 00:32:28.878 lat (usec): min=491, max=46204, avg=10427.22, stdev=4256.94 00:32:28.878 clat percentiles (usec): 00:32:28.878 | 1.00th=[ 1369], 
5.00th=[ 5014], 10.00th=[ 7439], 20.00th=[ 9110], 00:32:28.879 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:32:28.879 | 70.00th=[10159], 80.00th=[11338], 90.00th=[13173], 95.00th=[14615], 00:32:28.879 | 99.00th=[26346], 99.50th=[39060], 99.90th=[45876], 99.95th=[46400], 00:32:28.879 | 99.99th=[46400] 00:32:28.879 bw ( KiB/s): min=24792, max=25264, per=34.67%, avg=25028.00, stdev=333.75, samples=2 00:32:28.879 iops : min= 6198, max= 6316, avg=6257.00, stdev=83.44, samples=2 00:32:28.879 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.11% 00:32:28.879 lat (msec) : 2=1.19%, 4=3.48%, 10=51.21%, 20=39.79%, 50=4.16% 00:32:28.879 cpu : usr=2.39%, sys=4.88%, ctx=772, majf=0, minf=1 00:32:28.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:32:28.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:28.879 issued rwts: total=5986,6257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:28.879 job2: (groupid=0, jobs=1): err= 0: pid=2327427: Thu Oct 17 19:39:52 2024 00:32:28.879 read: IOPS=1480, BW=5920KiB/s (6062kB/s)(5944KiB/1004msec) 00:32:28.879 slat (nsec): min=1769, max=18753k, avg=242499.66, stdev=1410661.90 00:32:28.879 clat (msec): min=2, max=136, avg=30.97, stdev=23.37 00:32:28.879 lat (msec): min=3, max=138, avg=31.21, stdev=23.54 00:32:28.879 clat percentiles (msec): 00:32:28.879 | 1.00th=[ 7], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:32:28.879 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 22], 00:32:28.879 | 70.00th=[ 42], 80.00th=[ 53], 90.00th=[ 62], 95.00th=[ 70], 00:32:28.879 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138], 00:32:28.879 | 99.99th=[ 138] 00:32:28.879 write: IOPS=1529, BW=6120KiB/s (6266kB/s)(6144KiB/1004msec); 0 zone resets 00:32:28.879 slat (usec): min=3, max=22095, avg=407.26, stdev=1943.30 00:32:28.879 clat (msec): min=8, max=166, avg=52.67, stdev=52.34 00:32:28.879 lat (msec): min=8, max=166, avg=53.08, stdev=52.72 00:32:28.879 clat percentiles (msec): 00:32:28.879 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:32:28.879 | 30.00th=[ 11], 40.00th=[ 15], 50.00th=[ 35], 60.00th=[ 41], 00:32:28.879 | 70.00th=[ 59], 80.00th=[ 129], 90.00th=[ 144], 95.00th=[ 155], 00:32:28.879 | 99.00th=[ 165], 99.50th=[ 167], 99.90th=[ 167], 99.95th=[ 167], 00:32:28.879 | 99.99th=[ 167] 00:32:28.879 bw ( KiB/s): min= 4096, max= 8192, per=8.51%, avg=6144.00, stdev=2896.31, samples=2 00:32:28.879 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:32:28.879 lat (msec) : 4=0.36%, 10=16.64%, 20=32.36%, 50=22.77%, 100=15.19% 00:32:28.879 lat (msec) : 250=12.67% 00:32:28.879 cpu : usr=1.89%, sys=2.39%, ctx=161, majf=0, minf=1 00:32:28.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:32:28.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:28.879 issued rwts: total=1486,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:28.879 job3: (groupid=0, jobs=1): err= 0: pid=2327432: Thu Oct 17 19:39:52 2024 00:32:28.879 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:32:28.879 slat (nsec): min=1267, max=5015.3k, avg=83163.08, stdev=405518.84 00:32:28.879 clat (usec): min=6157, max=18915, 
avg=10590.26, stdev=1650.29 00:32:28.879 lat (usec): min=6163, max=18922, avg=10673.42, stdev=1682.36 00:32:28.879 clat percentiles (usec): 00:32:28.879 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 8979], 00:32:28.879 | 30.00th=[ 9896], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:32:28.879 | 70.00th=[11207], 80.00th=[11469], 90.00th=[12911], 95.00th=[13435], 00:32:28.879 | 99.00th=[14746], 99.50th=[15270], 99.90th=[15926], 99.95th=[15926], 00:32:28.879 | 99.99th=[19006] 00:32:28.879 write: IOPS=5766, BW=22.5MiB/s (23.6MB/s)(22.7MiB/1007msec); 0 zone resets 00:32:28.879 slat (usec): min=2, max=16357, avg=84.45, stdev=393.85 00:32:28.879 clat (usec): min=5968, max=37071, avg=11555.19, stdev=3823.91 00:32:28.879 lat (usec): min=5976, max=37111, avg=11639.65, stdev=3846.67 00:32:28.879 clat percentiles (usec): 00:32:28.879 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:32:28.879 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11207], 60.00th=[11338], 00:32:28.879 | 70.00th=[11338], 80.00th=[11469], 90.00th=[13435], 95.00th=[17171], 00:32:28.879 | 99.00th=[32637], 99.50th=[32900], 99.90th=[34341], 99.95th=[34341], 00:32:28.879 | 99.99th=[36963] 00:32:28.879 bw ( KiB/s): min=22600, max=22840, per=31.48%, avg=22720.00, stdev=169.71, samples=2 00:32:28.879 iops : min= 5650, max= 5710, avg=5680.00, stdev=42.43, samples=2 00:32:28.879 lat (msec) : 10=27.19%, 20=70.92%, 50=1.90% 00:32:28.879 cpu : usr=2.78%, sys=6.46%, ctx=811, majf=0, minf=2 00:32:28.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:32:28.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:28.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:28.879 issued rwts: total=5632,5807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:28.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:28.879 00:32:28.879 Run status group 0 (all jobs): 00:32:28.879 READ: bw=66.6MiB/s (69.8MB/s), 5920KiB/s-23.3MiB/s (6062kB/s-24.4MB/s), io=67.2MiB (70.5MB), run=1004-1009msec 00:32:28.879 WRITE: bw=70.5MiB/s (73.9MB/s), 6120KiB/s-24.3MiB/s (6266kB/s-25.5MB/s), io=71.1MiB (74.6MB), run=1004-1009msec 00:32:28.879 00:32:28.879 Disk stats (read/write): 00:32:28.879 nvme0n1: ios=3957/4096, merge=0/0, ticks=51197/54149, in_queue=105346, util=98.00% 00:32:28.879 nvme0n2: ios=5120/5126, merge=0/0, ticks=27807/23203, in_queue=51010, util=98.38% 00:32:28.879 nvme0n3: ios=1062/1375, merge=0/0, ticks=12942/25351, in_queue=38293, util=98.75% 00:32:28.879 nvme0n4: ios=4662/5120, merge=0/0, ticks=24825/26223, in_queue=51048, util=89.10% 00:32:28.879 19:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:28.879 19:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2327532 00:32:28.879 19:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:28.879 19:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:28.879 [global] 00:32:28.879 thread=1 00:32:28.879 invalidate=1 00:32:28.879 rw=read 00:32:28.879 time_based=1 00:32:28.879 runtime=10 00:32:28.879 ioengine=libaio 00:32:28.879 direct=1 00:32:28.879 bs=4096 00:32:28.879 iodepth=1 00:32:28.879 norandommap=1 00:32:28.879 numjobs=1 00:32:28.879 00:32:28.879 [job0] 00:32:28.879 filename=/dev/nvme0n1 
00:32:28.879 [job1] 00:32:28.879 filename=/dev/nvme0n2 00:32:28.879 [job2] 00:32:28.879 filename=/dev/nvme0n3 00:32:28.879 [job3] 00:32:28.879 filename=/dev/nvme0n4 00:32:28.879 Could not set queue depth (nvme0n1) 00:32:28.879 Could not set queue depth (nvme0n2) 00:32:28.879 Could not set queue depth (nvme0n3) 00:32:28.879 Could not set queue depth (nvme0n4) 00:32:29.138 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:29.138 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:29.138 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:29.138 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:29.138 fio-3.35 00:32:29.138 Starting 4 threads 00:32:31.661 19:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:31.919 19:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:31.919 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13373440, buflen=4096 00:32:31.919 fio: pid=2327876, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:32.176 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=20008960, buflen=4096 00:32:32.176 fio: pid=2327871, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:32.176 19:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:32.176 19:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:32.434 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=315392, buflen=4096 00:32:32.434 fio: pid=2327840, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:32.434 19:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:32.434 19:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:32.434 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:32.434 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:32.434 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54546432, buflen=4096 00:32:32.434 fio: pid=2327857, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:32.692 00:32:32.692 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2327840: Thu Oct 17 19:39:56 2024 00:32:32.692 read: IOPS=25, BW=99.2KiB/s (102kB/s)(308KiB/3105msec) 00:32:32.692 slat (usec): min=8, max=4833, avg=83.93, stdev=544.77 00:32:32.692 clat 
(usec): min=273, max=41968, avg=39959.51, stdev=6521.60 00:32:32.692 lat (usec): min=295, max=45998, avg=40044.23, stdev=6556.33 00:32:32.692 clat percentiles (usec): 00:32:32.692 | 1.00th=[ 273], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:32.692 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:32.692 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:32.692 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:32.692 | 99.99th=[42206] 00:32:32.692 bw ( KiB/s): min= 93, max= 112, per=0.38%, avg=99.50, stdev= 7.15, samples=6 00:32:32.692 iops : min= 23, max= 28, avg=24.83, stdev= 1.83, samples=6 00:32:32.692 lat (usec) : 500=2.56% 00:32:32.692 lat (msec) : 50=96.15% 00:32:32.692 cpu : usr=0.10%, sys=0.00%, ctx=79, majf=0, minf=1 00:32:32.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:32.692 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2327857: Thu Oct 17 19:39:56 2024 00:32:32.692 read: IOPS=4006, BW=15.6MiB/s (16.4MB/s)(52.0MiB/3324msec) 00:32:32.692 slat (usec): min=5, max=27606, avg=13.99, stdev=343.66 00:32:32.692 clat (usec): min=175, max=40999, avg=233.05, stdev=612.00 00:32:32.692 lat (usec): min=181, max=41024, avg=247.04, stdev=704.03 00:32:32.692 clat percentiles (usec): 00:32:32.692 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 204], 00:32:32.692 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 219], 60.00th=[ 225], 00:32:32.692 | 70.00th=[ 235], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:32:32.692 | 99.00th=[ 314], 99.50th=[ 363], 99.90th=[ 570], 99.95th=[ 717], 00:32:32.692 | 99.99th=[41157] 00:32:32.692 bw ( KiB/s): min=15392, max=18384, per=65.68%, avg=17027.83, stdev=1072.54, samples=6 00:32:32.692 iops : min= 3848, max= 4596, avg=4256.83, stdev=268.04, samples=6 00:32:32.692 lat (usec) : 250=84.67%, 500=15.08%, 750=0.20%, 1000=0.02% 00:32:32.692 lat (msec) : 2=0.01%, 50=0.02% 00:32:32.692 cpu : usr=1.11%, sys=3.55%, ctx=13326, majf=0, minf=2 00:32:32.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 issued rwts: total=13318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:32.692 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2327871: Thu Oct 17 19:39:56 2024 00:32:32.692 read: IOPS=1690, BW=6761KiB/s (6924kB/s)(19.1MiB/2890msec) 00:32:32.692 slat (usec): min=6, max=11633, avg=11.26, stdev=192.46 00:32:32.692 clat (usec): min=198, max=49953, avg=574.96, stdev=3644.12 00:32:32.692 lat (usec): min=205, max=49975, avg=586.22, stdev=3650.23 00:32:32.692 clat percentiles (usec): 00:32:32.692 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 239], 20.00th=[ 243], 00:32:32.692 | 30.00th=[ 247], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:32:32.692 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 258], 95.00th=[ 262], 00:32:32.692 | 99.00th=[ 314], 
99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:32.692 | 99.99th=[50070] 00:32:32.692 bw ( KiB/s): min= 96, max=15528, per=27.26%, avg=7068.80, stdev=6843.10, samples=5 00:32:32.692 iops : min= 24, max= 3882, avg=1767.20, stdev=1710.77, samples=5 00:32:32.692 lat (usec) : 250=56.55%, 500=42.63% 00:32:32.692 lat (msec) : 50=0.80% 00:32:32.692 cpu : usr=0.03%, sys=1.94%, ctx=4888, majf=0, minf=2 00:32:32.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 issued rwts: total=4886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:32.692 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2327876: Thu Oct 17 19:39:56 2024 00:32:32.692 read: IOPS=1210, BW=4841KiB/s (4957kB/s)(12.8MiB/2698msec) 00:32:32.692 slat (nsec): min=6618, max=43207, avg=8649.69, stdev=2078.47 00:32:32.692 clat (usec): min=199, max=41544, avg=809.71, stdev=4696.92 00:32:32.692 lat (usec): min=209, max=41552, avg=818.36, stdev=4698.10 00:32:32.692 clat percentiles (usec): 00:32:32.692 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 239], 00:32:32.692 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:32:32.692 | 70.00th=[ 260], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 343], 00:32:32.692 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:32.692 | 99.99th=[41681] 00:32:32.692 bw ( KiB/s): min= 96, max=14192, per=20.12%, avg=5216.00, stdev=6099.33, samples=5 00:32:32.692 iops : min= 24, max= 3548, avg=1304.00, stdev=1524.83, samples=5 00:32:32.692 lat (usec) : 250=50.61%, 500=47.98%, 750=0.03% 00:32:32.692 lat (msec) : 50=1.35% 00:32:32.692 cpu : usr=0.26%, sys=1.41%, ctx=3266, majf=0, minf=2 00:32:32.692 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.692 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.692 issued rwts: total=3266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.692 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:32.692 00:32:32.692 Run status group 0 (all jobs): 00:32:32.692 READ: bw=25.3MiB/s (26.5MB/s), 99.2KiB/s-15.6MiB/s (102kB/s-16.4MB/s), io=84.2MiB (88.2MB), run=2698-3324msec 00:32:32.692 00:32:32.692 Disk stats (read/write): 00:32:32.692 nvme0n1: ios=76/0, merge=0/0, ticks=3037/0, in_queue=3037, util=94.17% 00:32:32.692 nvme0n2: ios=13357/0, merge=0/0, ticks=4232/0, in_queue=4232, util=97.03% 00:32:32.692 nvme0n3: ios=4637/0, merge=0/0, ticks=2746/0, in_queue=2746, util=95.62% 00:32:32.692 nvme0n4: ios=3262/0, merge=0/0, ticks=2484/0, in_queue=2484, util=96.42% 00:32:32.692 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:32.692 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:32.950 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:32.950 19:39:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:33.207 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:33.207 19:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:33.464 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:33.464 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:33.464 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:33.464 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2327532 00:32:33.464 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:33.464 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:33.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:33.721 nvmf hotplug test: fio failed as expected 00:32:33.721 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - 
SIGINT SIGTERM EXIT 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:33.979 rmmod nvme_tcp 00:32:33.979 rmmod nvme_fabrics 00:32:33.979 rmmod nvme_keyring 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 2325041 ']' 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 2325041 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2325041 ']' 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2325041 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2325041 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2325041' 00:32:33.979 killing process with pid 2325041 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2325041 00:32:33.979 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2325041 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@789 -- # iptables-save 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.239 19:39:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.144 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:36.403 00:32:36.403 real 0m25.780s 00:32:36.403 user 1m31.017s 00:32:36.403 sys 0m11.306s 00:32:36.403 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:36.403 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.403 ************************************ 00:32:36.403 END TEST nvmf_fio_target 00:32:36.403 ************************************ 00:32:36.403 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:36.403 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:36.403 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:36.403 19:39:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:36.403 ************************************ 00:32:36.403 START TEST nvmf_bdevio 00:32:36.403 ************************************ 00:32:36.403 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:36.403 * Looking for test storage... 
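Before the bdevio storage probe above continues, it is worth noting what the nvmf_fio_target teardown (nvmftestfini, traced just before the END TEST marker) actually did: it unloaded the nvme_tcp/nvme_fabrics/nvme_keyring modules, killed the target process (pid 2325041), and stripped only the firewall rules the test had tagged. A minimal sketch of that iptables cleanup idiom, using the SPDK_NVMF comment tag these scripts attach to their rules:

    # Drop only the iptables rules carrying the SPDK_NVMF comment tag:
    # dump the ruleset, filter out the tagged lines, load the result back.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The teardown then removes the cvl_0_0_ns_spdk namespace (remove_spdk_ns) and flushes the IPv4 addresses from the test interfaces (ip -4 addr flush cvl_0_1), so the bdevio test starting here inherits a clean network state.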
00:32:36.404 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:36.404 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:36.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.664 --rc genhtml_branch_coverage=1 00:32:36.664 --rc genhtml_function_coverage=1 00:32:36.664 --rc genhtml_legend=1 00:32:36.664 --rc geninfo_all_blocks=1 00:32:36.664 --rc geninfo_unexecuted_blocks=1 00:32:36.664 00:32:36.664 ' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:36.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.664 --rc genhtml_branch_coverage=1 00:32:36.664 --rc genhtml_function_coverage=1 00:32:36.664 --rc genhtml_legend=1 00:32:36.664 --rc geninfo_all_blocks=1 00:32:36.664 --rc geninfo_unexecuted_blocks=1 00:32:36.664 00:32:36.664 ' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:36.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.664 --rc genhtml_branch_coverage=1 00:32:36.664 --rc genhtml_function_coverage=1 00:32:36.664 --rc genhtml_legend=1 00:32:36.664 --rc geninfo_all_blocks=1 00:32:36.664 --rc geninfo_unexecuted_blocks=1 00:32:36.664 00:32:36.664 ' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:36.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:36.664 --rc genhtml_branch_coverage=1 00:32:36.664 --rc genhtml_function_coverage=1 00:32:36.664 --rc genhtml_legend=1 00:32:36.664 --rc geninfo_all_blocks=1 00:32:36.664 --rc geninfo_unexecuted_blocks=1 00:32:36.664 00:32:36.664 ' 00:32:36.664 19:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.664 19:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:36.664 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:36.665 19:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.236 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:43.237 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:43.237 19:40:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:43.237 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:43.237 Found net devices under 0000:86:00.0: cvl_0_0 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 
-- # [[ tcp == tcp ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:43.237 Found net devices under 0000:86:00.1: cvl_0_1 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.237 19:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:32:43.237 00:32:43.237 --- 10.0.0.2 ping statistics --- 00:32:43.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.237 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:43.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:43.237 00:32:43.237 --- 10.0.0.1 ping statistics --- 00:32:43.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.237 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:43.237 19:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=2332103 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 2332103 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2332103 ']' 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:43.237 19:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.237 [2024-10-17 19:40:06.165672] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:43.237 [2024-10-17 19:40:06.166590] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:32:43.237 [2024-10-17 19:40:06.166627] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.237 [2024-10-17 19:40:06.247662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:43.238 [2024-10-17 19:40:06.288581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.238 [2024-10-17 19:40:06.288622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.238 [2024-10-17 19:40:06.288629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.238 [2024-10-17 19:40:06.288635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.238 [2024-10-17 19:40:06.288640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.238 [2024-10-17 19:40:06.290179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:43.238 [2024-10-17 19:40:06.290215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:43.238 [2024-10-17 19:40:06.290321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:43.238 [2024-10-17 19:40:06.290323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:43.238 [2024-10-17 19:40:06.356121] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
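The DPDK EAL banner above comes from nvmf_tgt being launched inside the network namespace that nvmf/common.sh built a few steps earlier. A condensed sketch of that sequence, reassembled from the commands traced above (interface names and addresses exactly as they appear in this log):

    # Target-side port moves into a private namespace; the initiator side
    # stays in the root namespace, giving a real TCP hop between the two.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagged so the teardown can find the rule later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    # Sanity-check reachability in both directions, then start the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF \
        --interrupt-mode -m 0x78

The core mask 0x78 selects cores 3-6, which matches the four "Reactor started on core" notices, and --interrupt-mode is why the thread.c lines around this point report each spdk_thread being set to intr mode.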
00:32:43.238 [2024-10-17 19:40:06.356793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:43.238 [2024-10-17 19:40:06.356979] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:43.238 [2024-10-17 19:40:06.357296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:43.238 [2024-10-17 19:40:06.357359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:43.238 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:43.238 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:32:43.238 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:43.238 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:43.238 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.498 [2024-10-17 19:40:07.047146] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.498 Malloc0 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.498 19:40:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:43.498 [2024-10-17 19:40:07.131437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:32:43.498 { 00:32:43.498 "params": { 00:32:43.498 "name": "Nvme$subsystem", 00:32:43.498 "trtype": "$TEST_TRANSPORT", 00:32:43.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:43.498 "adrfam": "ipv4", 00:32:43.498 "trsvcid": "$NVMF_PORT", 00:32:43.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:43.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:43.498 "hdgst": ${hdgst:-false}, 00:32:43.498 "ddgst": ${ddgst:-false} 00:32:43.498 }, 00:32:43.498 "method": "bdev_nvme_attach_controller" 00:32:43.498 } 00:32:43.498 EOF 00:32:43.498 )") 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:32:43.498 19:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:32:43.498 "params": { 00:32:43.498 "name": "Nvme1", 00:32:43.498 "trtype": "tcp", 00:32:43.498 "traddr": "10.0.0.2", 00:32:43.498 "adrfam": "ipv4", 00:32:43.498 "trsvcid": "4420", 00:32:43.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:43.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:43.498 "hdgst": false, 00:32:43.498 "ddgst": false 00:32:43.498 }, 00:32:43.498 "method": "bdev_nvme_attach_controller" 00:32:43.498 }' 00:32:43.498 [2024-10-17 19:40:07.184586] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
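gen_nvmf_target_json, traced above, expands the heredoc once per subsystem, runs the fragment through jq, and prints the single bdev_nvme_attach_controller entry for Nvme1 (tcp / 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1, with hdgst and ddgst defaulting to false via ${hdgst:-false}). The generator and the bdevio invocation combine as a process substitution, roughly as sketched below; the harness's file descriptor happens to be /dev/fd/62, and $rootdir stands in for the spdk checkout used in this run:

    # Hypothetical standalone equivalent of the traced pair of commands:
    "$rootdir/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json)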
00:32:43.498 [2024-10-17 19:40:07.184661] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2332198 ] 00:32:43.498 [2024-10-17 19:40:07.263999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:43.757 [2024-10-17 19:40:07.307820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.757 [2024-10-17 19:40:07.307926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.757 [2024-10-17 19:40:07.307926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:44.015 I/O targets: 00:32:44.015 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:44.015 00:32:44.015 00:32:44.015 CUnit - A unit testing framework for C - Version 2.1-3 00:32:44.015 http://cunit.sourceforge.net/ 00:32:44.015 00:32:44.015 00:32:44.015 Suite: bdevio tests on: Nvme1n1 00:32:44.015 Test: blockdev write read block ...passed 00:32:44.015 Test: blockdev write zeroes read block ...passed 00:32:44.015 Test: blockdev write zeroes read no split ...passed 00:32:44.016 Test: blockdev write zeroes read split ...passed 00:32:44.016 Test: blockdev write zeroes read split partial ...passed 00:32:44.016 Test: blockdev reset ...[2024-10-17 19:40:07.769118] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:32:44.016 [2024-10-17 19:40:07.769180] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13923c0 (9): Bad file descriptor 00:32:44.275 [2024-10-17 19:40:07.861558] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:32:44.275 passed 00:32:44.275 Test: blockdev write read 8 blocks ...passed 00:32:44.275 Test: blockdev write read size > 128k ...passed 00:32:44.275 Test: blockdev write read invalid size ...passed 00:32:44.275 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:44.275 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:44.275 Test: blockdev write read max offset ...passed 00:32:44.275 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:44.534 Test: blockdev writev readv 8 blocks ...passed 00:32:44.534 Test: blockdev writev readv 30 x 1block ...passed 00:32:44.534 Test: blockdev writev readv block ...passed 00:32:44.534 Test: blockdev writev readv size > 128k ...passed 00:32:44.534 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:44.534 Test: blockdev comparev and writev ...[2024-10-17 19:40:08.112457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.112485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.112498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.112506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.112806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.112818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.112830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.112837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.113124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.113134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.113145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.113153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.113446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.113456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.113468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:44.534 [2024-10-17 19:40:08.113474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:44.534 passed 00:32:44.534 Test: blockdev nvme passthru rw ...passed 00:32:44.534 Test: blockdev nvme passthru vendor specific ...[2024-10-17 19:40:08.195933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:44.534 [2024-10-17 19:40:08.195951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.196064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:44.534 [2024-10-17 19:40:08.196073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.196182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:44.534 [2024-10-17 19:40:08.196191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:44.534 [2024-10-17 19:40:08.196299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:44.534 [2024-10-17 19:40:08.196309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:44.534 passed 00:32:44.534 Test: blockdev nvme admin passthru ...passed 00:32:44.534 Test: blockdev copy ...passed 00:32:44.534 00:32:44.534 Run Summary: Type Total Ran Passed Failed Inactive 00:32:44.534 suites 1 1 n/a 0 0 00:32:44.534 tests 23 23 23 0 0 00:32:44.534 asserts 152 152 152 0 n/a 00:32:44.534 00:32:44.534 Elapsed time = 1.273 seconds 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:44.793 rmmod nvme_tcp 00:32:44.793 rmmod nvme_fabrics 00:32:44.793 rmmod nvme_keyring 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
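The teardown starting here mirrors the setup. A condensed sketch of the traced nvmftestfini path, with set +e around the module unloads as in the log; the namespace removal itself is hidden behind _remove_spdk_ns and is not expanded in the trace:

    set +e
    modprobe -v -r nvme-tcp       # cascades: rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"               # killprocess first checks ps --no-headers -o comm= $nvmfpid
    wait "$nvmfpid"
    # iptr: restore everything except the rules commented SPDK_NVMF at setup time
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns               # assumption: deletes cvl_0_0_ns_spdk, not shown expanded
    ip -4 addr flush cvl_0_1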
00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 2332103 ']' 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 2332103 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2332103 ']' 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2332103 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2332103 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2332103' 00:32:44.793 killing process with pid 2332103 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2332103 00:32:44.793 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2332103 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.053 19:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.589 19:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:47.589 00:32:47.589 real 0m10.745s 00:32:47.589 user 
0m10.037s 00:32:47.589 sys 0m5.291s 00:32:47.589 19:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.589 19:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:47.589 ************************************ 00:32:47.589 END TEST nvmf_bdevio 00:32:47.589 ************************************ 00:32:47.589 19:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:47.589 00:32:47.589 real 4m34.488s 00:32:47.589 user 9m7.287s 00:32:47.589 sys 1m52.907s 00:32:47.589 19:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:47.589 19:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:47.589 ************************************ 00:32:47.589 END TEST nvmf_target_core_interrupt_mode 00:32:47.589 ************************************ 00:32:47.589 19:40:10 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:47.589 19:40:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:32:47.589 19:40:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:47.589 19:40:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:47.589 ************************************ 00:32:47.589 START TEST nvmf_interrupt 00:32:47.589 ************************************ 00:32:47.589 19:40:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:47.589 * Looking for test storage... 
00:32:47.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:47.589 19:40:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:47.589 19:40:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:32:47.589 19:40:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:47.589 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:47.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.590 --rc genhtml_branch_coverage=1 00:32:47.590 --rc genhtml_function_coverage=1 00:32:47.590 --rc genhtml_legend=1 00:32:47.590 --rc geninfo_all_blocks=1 00:32:47.590 --rc geninfo_unexecuted_blocks=1 00:32:47.590 00:32:47.590 ' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:47.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.590 --rc genhtml_branch_coverage=1 00:32:47.590 --rc genhtml_function_coverage=1 00:32:47.590 --rc genhtml_legend=1 00:32:47.590 --rc geninfo_all_blocks=1 00:32:47.590 --rc geninfo_unexecuted_blocks=1 00:32:47.590 00:32:47.590 ' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:47.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.590 --rc genhtml_branch_coverage=1 00:32:47.590 --rc genhtml_function_coverage=1 00:32:47.590 --rc genhtml_legend=1 00:32:47.590 --rc geninfo_all_blocks=1 00:32:47.590 --rc geninfo_unexecuted_blocks=1 00:32:47.590 00:32:47.590 ' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:47.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:47.590 --rc genhtml_branch_coverage=1 00:32:47.590 --rc genhtml_function_coverage=1 00:32:47.590 --rc genhtml_legend=1 00:32:47.590 --rc geninfo_all_blocks=1 00:32:47.590 --rc geninfo_unexecuted_blocks=1 00:32:47.590 00:32:47.590 ' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # prepare_net_devs 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@436 -- # local -g is_hw=no 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # remove_spdk_ns 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:47.590 19:40:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:54.162 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.162 19:40:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:54.162 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:54.162 Found net devices under 0000:86:00.0: cvl_0_0 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ up == up ]] 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:54.162 Found net devices under 0000:86:00.1: cvl_0_1 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:32:54.162 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # is_hw=yes 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:32:54.163 19:40:16 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:54.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:54.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:32:54.163 00:32:54.163 --- 10.0.0.2 ping statistics --- 00:32:54.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.163 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:54.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:54.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:54.163 00:32:54.163 --- 10.0.0.1 ping statistics --- 00:32:54.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.163 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@448 -- # return 0 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:32:54.163 19:40:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # nvmfpid=2335905 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # waitforlisten 2335905 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@831 -- # '[' -z 2335905 ']' 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 [2024-10-17 19:40:17.073946] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:54.163 [2024-10-17 19:40:17.074859] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:32:54.163 [2024-10-17 19:40:17.074891] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.163 [2024-10-17 19:40:17.154276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:54.163 [2024-10-17 19:40:17.195096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:32:54.163 [2024-10-17 19:40:17.195131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.163 [2024-10-17 19:40:17.195138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.163 [2024-10-17 19:40:17.195144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.163 [2024-10-17 19:40:17.195149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:54.163 [2024-10-17 19:40:17.196271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.163 [2024-10-17 19:40:17.196274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.163 [2024-10-17 19:40:17.261657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:54.163 [2024-10-17 19:40:17.262130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:54.163 [2024-10-17 19:40:17.262382] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # return 0 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:54.163 5000+0 records in 00:32:54.163 5000+0 records out 00:32:54.163 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0185061 s, 553 MB/s 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 AIO0 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 [2024-10-17 19:40:17.401040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.163 19:40:17 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:54.163 [2024-10-17 19:40:17.437333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2335905 0 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2335905 0 idle 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:54.163 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335905 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0' 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335905 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.25 reactor_0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2335905 1 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2335905 1 idle 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335909 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335909 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2336165 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
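reactor_is_busy, driven here with BUSY_THRESHOLD=30 while spdk_nvme_perf loads the target, samples one batch-mode top iteration per attempt and compares the reactor thread's %CPU column against the threshold, retrying up to ten times with a one-second sleep. A minimal sketch of that probe; is_reactor_busy is an illustrative name, the top/sed/awk pipeline is the one from the trace:

    is_reactor_busy() {
        local pid=$1 idx=$2 threshold=$3 row cpu
        for ((j = 10; j != 0; j--)); do
            row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" | sed -e 's/^\s*//g')
            cpu=$(awk '{print $9}' <<<"$row")   # %CPU field: 12.5 here, 99.9 a second later
            cpu=${cpu%.*}                       # truncate the fraction, as the trace does
            (( cpu >= threshold )) && return 0  # busy enough
            sleep 1                             # the perf workload may still be ramping up
        done
        return 1
    }

The first sample above reads 12.5% (below the threshold, so the loop sleeps and retries); the next iteration, shown below, reads 99.9% and the check passes.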
00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2335905 0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2335905 0 busy 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:32:54.164 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335905 root 20 0 128.2g 46848 33792 R 12.5 0.0 0:00.27 reactor_0' 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335905 root 20 0 128.2g 46848 33792 R 12.5 0.0 0:00.27 reactor_0 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=12.5 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=12 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:54.423 19:40:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:55.358 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:55.358 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:55.358 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:32:55.358 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335905 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.63 reactor_0' 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335905 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:02.63 reactor_0 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2335905 1 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2335905 1 busy 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335909 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.38 reactor_1' 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335909 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:01.38 reactor_1 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:55.617 19:40:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2336165 00:33:05.597 [2024-10-17 19:40:28.016437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x68a880 is same with the state(6) to be set 00:33:05.597 Initializing NVMe Controllers 00:33:05.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:05.597 Controller IO queue size 256, less than required. 00:33:05.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:05.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:05.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:05.597 Initialization complete. 
Launching workers. 00:33:05.597 ======================================================== 00:33:05.597 Latency(us) 00:33:05.597 Device Information : IOPS MiB/s Average min max 00:33:05.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15872.70 62.00 16137.81 2774.18 30481.40 00:33:05.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16371.20 63.95 15643.40 7595.75 56651.51 00:33:05.597 ======================================================== 00:33:05.597 Total : 32243.90 125.95 15886.78 2774.18 56651.51 00:33:05.597 00:33:05.597 19:40:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:05.597 19:40:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2335905 0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2335905 0 idle 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335905 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0' 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335905 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.24 reactor_0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2335905 1 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2335905 1 idle 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335909 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335909 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1198 -- # local i=0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:33:05.598 19:40:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1205 -- # sleep 2 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # return 0 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2335905 0 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2335905 0 idle 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335905 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.47 reactor_0' 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335905 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.47 reactor_0 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:07.505 19:40:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2335905 1 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2335905 1 idle 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2335905 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
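Every idle/busy verdict in this trace is produced the same way: take one batch sample of the target's threads with top, pull the %CPU column for the reactor of interest, truncate it to an integer, and compare it against the thresholds declared at the start of the check (the idle checks use busy_threshold=65 / idle_threshold=30 as above; the busy checks override BUSY_THRESHOLD to 30). A paraphrased sketch of that probe, with up to ten one-second retries as in the (( j = 10 )) loop; the helper name and the exact control flow in interrupt/common.sh may differ slightly:

    # pid = target pid, idx = reactor index, state = "busy" or "idle"
    reactor_in_state() {
        local pid=$1 idx=$2 state=$3
        local busy_threshold=65 idle_threshold=30
        for ((j = 10; j != 0; j--)); do
            # -b batch mode, -H per-thread, -n 1 single sample, -w 256 wide;
            # field 9 of the matching thread line is %CPU.
            local cpu
            cpu=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" |
                  sed -e 's/^\s*//g' | awk '{print $9}')
            cpu=${cpu%.*}    # 99.9 -> 99, 0.0 -> 0, matching the trace
            [[ $state == busy ]] && ((cpu >= busy_threshold)) && return 0
            [[ $state == idle ]] && ((cpu <= idle_threshold)) && return 0
            sleep 1          # reactor not in the expected state yet; resample
        done
        return 1
    }
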
00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2335905 -w 256 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2335909 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1' 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2335909 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:07.505 19:40:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:07.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1219 -- # local i=0 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # return 0 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.765 rmmod nvme_tcp 00:33:07.765 rmmod nvme_fabrics 00:33:07.765 rmmod nvme_keyring 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@515 -- # '[' -n 
2335905 ']' 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # killprocess 2335905 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@950 -- # '[' -z 2335905 ']' 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # kill -0 2335905 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # uname 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2335905 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2335905' 00:33:07.765 killing process with pid 2335905 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # kill 2335905 00:33:07.765 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@974 -- # wait 2335905 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-save 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@789 -- # iptables-restore 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:08.025 19:40:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.932 19:40:33 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:10.192 00:33:10.192 real 0m22.858s 00:33:10.192 user 0m39.698s 00:33:10.192 sys 0m8.506s 00:33:10.192 19:40:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.192 19:40:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:10.192 ************************************ 00:33:10.192 END TEST nvmf_interrupt 00:33:10.192 ************************************ 00:33:10.192 00:33:10.192 real 27m13.514s 00:33:10.192 user 56m18.479s 00:33:10.192 sys 9m16.109s 00:33:10.192 19:40:33 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:10.192 19:40:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.192 ************************************ 00:33:10.192 END TEST nvmf_tcp 00:33:10.192 ************************************ 00:33:10.192 19:40:33 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:33:10.192 19:40:33 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:10.192 19:40:33 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:10.192 19:40:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:10.192 19:40:33 -- common/autotest_common.sh@10 -- # set +x 00:33:10.192 ************************************ 00:33:10.192 START TEST spdkcli_nvmf_tcp 00:33:10.192 ************************************ 00:33:10.192 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:10.192 * Looking for test storage... 00:33:10.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:10.192 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:10.192 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:33:10.192 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.452 --rc genhtml_branch_coverage=1 00:33:10.452 --rc genhtml_function_coverage=1 00:33:10.452 --rc genhtml_legend=1 00:33:10.452 --rc geninfo_all_blocks=1 00:33:10.452 --rc geninfo_unexecuted_blocks=1 00:33:10.452 00:33:10.452 ' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.452 --rc genhtml_branch_coverage=1 00:33:10.452 --rc genhtml_function_coverage=1 00:33:10.452 --rc genhtml_legend=1 00:33:10.452 --rc geninfo_all_blocks=1 00:33:10.452 --rc geninfo_unexecuted_blocks=1 00:33:10.452 00:33:10.452 ' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.452 --rc genhtml_branch_coverage=1 00:33:10.452 --rc genhtml_function_coverage=1 00:33:10.452 --rc genhtml_legend=1 00:33:10.452 --rc geninfo_all_blocks=1 00:33:10.452 --rc geninfo_unexecuted_blocks=1 00:33:10.452 00:33:10.452 ' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:10.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:10.452 --rc genhtml_branch_coverage=1 00:33:10.452 --rc genhtml_function_coverage=1 00:33:10.452 --rc genhtml_legend=1 00:33:10.452 --rc geninfo_all_blocks=1 00:33:10.452 --rc geninfo_unexecuted_blocks=1 00:33:10.452 00:33:10.452 ' 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:10.452 19:40:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:10.452 
19:40:33 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.452 19:40:34 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:10.453 19:40:34 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:10.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2338858 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2338858 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2338858 ']' 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:10.453 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.453 [2024-10-17 19:40:34.079248] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
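Here the target for the spdkcli test has just been launched on a two-core mask, and the harness will now block until the application's RPC socket answers (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above). A condensed sketch of that launch-and-wait pattern; the rpc.py poll below is a stand-in for the real waitforlisten helper in autotest_common.sh, and the retry bound mirrors max_retries=100 from the trace:

    # -m 0x3: reactors on cores 0 and 1; -p 0: main core 0
    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_tgt_pid=$!

    # Poll the default RPC socket until the target is ready to serve RPCs.
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
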
00:33:10.453 [2024-10-17 19:40:34.079297] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2338858 ] 00:33:10.453 [2024-10-17 19:40:34.152784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:10.453 [2024-10-17 19:40:34.193200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.453 [2024-10-17 19:40:34.193200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.712 19:40:34 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:10.712 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:10.712 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:10.712 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:10.712 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:10.712 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:10.712 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:10.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:10.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:10.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:10.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:10.712 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:10.712 ' 00:33:13.248 [2024-10-17 19:40:37.021982] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.623 [2024-10-17 19:40:38.366444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:17.156 [2024-10-17 19:40:40.842063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:19.690 [2024-10-17 19:40:42.996631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:21.068 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:21.068 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:21.068 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:21.068 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:21.068 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:21.068 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:21.068 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:21.068 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:21.068 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:21.068 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:21.068 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:21.068 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:21.068 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:21.068 19:40:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.636 
19:40:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:21.636 19:40:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:21.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:21.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:21.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:21.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:21.636 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:21.636 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:21.636 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:21.636 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:21.636 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:21.636 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:21.636 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:21.636 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:21.636 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:21.636 ' 00:33:27.055 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:27.055 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:27.055 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:27.055 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:27.055 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:27.055 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:27.055 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:27.055 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:27.055 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:27.055 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:27.055 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:27.055 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:27.055 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:27.055 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:27.314 
19:40:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2338858 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2338858 ']' 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2338858 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2338858 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2338858' 00:33:27.314 killing process with pid 2338858 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2338858 00:33:27.314 19:40:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2338858 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2338858 ']' 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2338858 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2338858 ']' 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2338858 00:33:27.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2338858) - No such process 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2338858 is not found' 00:33:27.573 Process with pid 2338858 is not found 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:27.573 00:33:27.573 real 0m17.313s 00:33:27.573 user 0m38.149s 00:33:27.573 sys 0m0.754s 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:27.573 19:40:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:27.573 ************************************ 00:33:27.573 END TEST spdkcli_nvmf_tcp 00:33:27.573 ************************************ 00:33:27.573 19:40:51 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:27.573 19:40:51 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:33:27.573 19:40:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:27.573 19:40:51 -- common/autotest_common.sh@10 -- # set +x 00:33:27.573 ************************************ 00:33:27.573 START TEST nvmf_identify_passthru 00:33:27.573 ************************************ 00:33:27.573 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:27.573 * Looking for test 
storage... 00:33:27.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.573 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:27.573 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:33:27.573 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:27.834 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:27.834 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.834 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:27.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.834 --rc genhtml_branch_coverage=1 00:33:27.834 --rc genhtml_function_coverage=1 00:33:27.834 --rc genhtml_legend=1 00:33:27.834 --rc geninfo_all_blocks=1 00:33:27.834 --rc geninfo_unexecuted_blocks=1 00:33:27.834 00:33:27.834 ' 00:33:27.834 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:27.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.834 --rc genhtml_branch_coverage=1 00:33:27.834 --rc genhtml_function_coverage=1 00:33:27.834 --rc genhtml_legend=1 00:33:27.834 --rc geninfo_all_blocks=1 00:33:27.834 --rc geninfo_unexecuted_blocks=1 00:33:27.834 00:33:27.834 ' 00:33:27.834 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:27.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.834 --rc genhtml_branch_coverage=1 00:33:27.834 --rc genhtml_function_coverage=1 00:33:27.834 --rc genhtml_legend=1 00:33:27.834 --rc geninfo_all_blocks=1 00:33:27.834 --rc geninfo_unexecuted_blocks=1 00:33:27.834 00:33:27.834 ' 00:33:27.834 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:27.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.834 --rc genhtml_branch_coverage=1 00:33:27.834 --rc genhtml_function_coverage=1 00:33:27.834 --rc genhtml_legend=1 00:33:27.834 --rc geninfo_all_blocks=1 00:33:27.834 --rc geninfo_unexecuted_blocks=1 00:33:27.834 00:33:27.834 ' 00:33:27.834 19:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.834 19:40:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.834 19:40:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.834 19:40:51 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.834 19:40:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:27.834 19:40:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:27.834 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.834 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.834 19:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.834 19:40:51 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.835 19:40:51 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.835 19:40:51 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.835 19:40:51 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.835 19:40:51 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:27.835 19:40:51 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.835 19:40:51 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.835 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:27.835 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:27.835 19:40:51 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.835 19:40:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:34.408 19:40:56 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:34.408 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:34.408 19:40:56 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:34.408 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:34.408 Found net devices under 0000:86:00.0: cvl_0_0 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:34.408 Found net devices under 0000:86:00.1: cvl_0_1 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@440 -- # is_hw=yes 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:34.408 19:40:57 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:34.408 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:34.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:34.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:33:34.409 00:33:34.409 --- 10.0.0.2 ping statistics --- 00:33:34.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.409 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:34.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:34.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:33:34.409 00:33:34.409 --- 10.0.0.1 ping statistics --- 00:33:34.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:34.409 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@448 -- # return 0 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:33:34.409 19:40:57 nvmf_identify_passthru -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 00:33:34.409 19:40:57 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:5e:00.0 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:34.409 19:40:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:38.600 19:41:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=PHLN951000C61P6AGN 00:33:38.600 19:41:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:38.600 19:41:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:38.600 19:41:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2346197 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:43.872 19:41:06 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2346197 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2346197 ']' 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:43.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:33:43.872 19:41:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:43.872 [2024-10-17 19:41:06.996357] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:33:43.872 [2024-10-17 19:41:06.996407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.872 [2024-10-17 19:41:07.078758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:43.872 [2024-10-17 19:41:07.121318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:43.872 [2024-10-17 19:41:07.121357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:43.872 [2024-10-17 19:41:07.121364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:43.872 [2024-10-17 19:41:07.121371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:43.872 [2024-10-17 19:41:07.121376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:43.872 [2024-10-17 19:41:07.122813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:43.872 [2024-10-17 19:41:07.122919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:43.872 [2024-10-17 19:41:07.123028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:43.872 [2024-10-17 19:41:07.123029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:33:44.131 19:41:07 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:44.131 INFO: Log level set to 20 00:33:44.131 INFO: Requests: 00:33:44.131 { 00:33:44.131 "jsonrpc": "2.0", 00:33:44.131 "method": "nvmf_set_config", 00:33:44.131 "id": 1, 00:33:44.131 "params": { 00:33:44.131 "admin_cmd_passthru": { 00:33:44.131 "identify_ctrlr": true 00:33:44.131 } 00:33:44.131 } 00:33:44.131 } 00:33:44.131 00:33:44.131 INFO: response: 00:33:44.131 { 00:33:44.131 "jsonrpc": "2.0", 00:33:44.131 "id": 1, 00:33:44.131 "result": true 00:33:44.131 } 00:33:44.131 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.131 19:41:07 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.131 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:44.131 INFO: Setting log level to 20 00:33:44.131 INFO: Setting log level to 20 00:33:44.131 INFO: Log level set to 20 00:33:44.131 INFO: Log level set to 20 00:33:44.131 INFO: Requests: 00:33:44.131 { 00:33:44.131 "jsonrpc": "2.0", 00:33:44.131 "method": "framework_start_init", 00:33:44.131 "id": 1 00:33:44.131 } 00:33:44.131 00:33:44.131 INFO: Requests: 00:33:44.131 { 00:33:44.131 "jsonrpc": "2.0", 00:33:44.131 "method": "framework_start_init", 00:33:44.131 "id": 1 00:33:44.131 } 00:33:44.131 00:33:44.131 [2024-10-17 19:41:07.915192] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:44.390 INFO: response: 00:33:44.390 { 00:33:44.390 "jsonrpc": "2.0", 00:33:44.390 "id": 1, 00:33:44.390 "result": true 00:33:44.390 } 00:33:44.390 00:33:44.390 INFO: response: 00:33:44.390 { 00:33:44.390 "jsonrpc": "2.0", 00:33:44.390 "id": 1, 00:33:44.390 "result": true 00:33:44.390 } 00:33:44.390 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.390 19:41:07 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.390 19:41:07 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:44.390 INFO: Setting log level to 40 00:33:44.390 INFO: Setting log level to 40 00:33:44.390 INFO: Setting log level to 40 00:33:44.390 [2024-10-17 19:41:07.928518] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:44.390 19:41:07 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:44.390 19:41:07 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:44.390 19:41:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.679 Nvme0n1 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.679 [2024-10-17 19:41:10.846621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.679 [ 00:33:47.679 { 00:33:47.679 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:47.679 "subtype": "Discovery", 00:33:47.679 "listen_addresses": [], 00:33:47.679 "allow_any_host": true, 00:33:47.679 "hosts": [] 00:33:47.679 }, 00:33:47.679 { 00:33:47.679 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:47.679 "subtype": "NVMe", 00:33:47.679 "listen_addresses": [ 00:33:47.679 { 00:33:47.679 "trtype": "TCP", 00:33:47.679 "adrfam": "IPv4", 00:33:47.679 "traddr": "10.0.0.2", 00:33:47.679 "trsvcid": "4420" 00:33:47.679 } 00:33:47.679 ], 00:33:47.679 "allow_any_host": true, 00:33:47.679 "hosts": [], 00:33:47.679 "serial_number": 
"SPDK00000000000001", 00:33:47.679 "model_number": "SPDK bdev Controller", 00:33:47.679 "max_namespaces": 1, 00:33:47.679 "min_cntlid": 1, 00:33:47.679 "max_cntlid": 65519, 00:33:47.679 "namespaces": [ 00:33:47.679 { 00:33:47.679 "nsid": 1, 00:33:47.679 "bdev_name": "Nvme0n1", 00:33:47.679 "name": "Nvme0n1", 00:33:47.679 "nguid": "3E575B876F13426E929E6116B27CC394", 00:33:47.679 "uuid": "3e575b87-6f13-426e-929e-6116b27cc394" 00:33:47.679 } 00:33:47.679 ] 00:33:47.679 } 00:33:47.679 ] 00:33:47.679 19:41:10 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:47.679 19:41:10 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:47.679 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:47.679 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:47.679 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:47.679 19:41:11 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:47.679 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@514 -- # nvmfcleanup 00:33:47.679 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:47.679 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:47.679 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:47.679 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:47.679 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:47.679 rmmod nvme_tcp 00:33:47.679 rmmod nvme_fabrics 00:33:47.938 rmmod nvme_keyring 00:33:47.938 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:47.938 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:47.938 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:47.938 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@515 -- # 
'[' -n 2346197 ']' 00:33:47.938 19:41:11 nvmf_identify_passthru -- nvmf/common.sh@516 -- # killprocess 2346197 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2346197 ']' 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2346197 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2346197 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2346197' 00:33:47.938 killing process with pid 2346197 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2346197 00:33:47.938 19:41:11 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2346197 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-save 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@789 -- # iptables-restore 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.843 19:41:13 nvmf_identify_passthru -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.843 19:41:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:49.843 19:41:13 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.379 19:41:15 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:52.379 00:33:52.379 real 0m24.357s 00:33:52.379 user 0m33.384s 00:33:52.379 sys 0m6.307s 00:33:52.379 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:33:52.379 19:41:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:52.379 ************************************ 00:33:52.379 END TEST nvmf_identify_passthru 00:33:52.379 ************************************ 00:33:52.379 19:41:15 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:52.379 19:41:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:33:52.379 19:41:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:33:52.379 19:41:15 -- common/autotest_common.sh@10 -- # set +x 00:33:52.379 ************************************ 00:33:52.379 START TEST nvmf_dif 00:33:52.379 ************************************ 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:52.379 * Looking for test 
storage... 00:33:52.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:52.379 19:41:15 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.379 --rc genhtml_branch_coverage=1 00:33:52.379 --rc genhtml_function_coverage=1 00:33:52.379 --rc genhtml_legend=1 00:33:52.379 --rc geninfo_all_blocks=1 00:33:52.379 --rc geninfo_unexecuted_blocks=1 00:33:52.379 00:33:52.379 ' 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.379 --rc genhtml_branch_coverage=1 00:33:52.379 --rc genhtml_function_coverage=1 00:33:52.379 --rc genhtml_legend=1 00:33:52.379 --rc geninfo_all_blocks=1 00:33:52.379 --rc geninfo_unexecuted_blocks=1 00:33:52.379 00:33:52.379 ' 00:33:52.379 19:41:15 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.379 --rc genhtml_branch_coverage=1 00:33:52.379 --rc genhtml_function_coverage=1 00:33:52.379 --rc genhtml_legend=1 00:33:52.379 --rc geninfo_all_blocks=1 00:33:52.379 --rc geninfo_unexecuted_blocks=1 00:33:52.379 00:33:52.379 ' 00:33:52.379 19:41:15 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:52.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:52.379 --rc genhtml_branch_coverage=1 00:33:52.379 --rc genhtml_function_coverage=1 00:33:52.379 --rc genhtml_legend=1 00:33:52.379 --rc geninfo_all_blocks=1 00:33:52.379 --rc geninfo_unexecuted_blocks=1 00:33:52.379 00:33:52.379 ' 00:33:52.379 19:41:15 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:52.379 19:41:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:52.380 19:41:15 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:52.380 19:41:15 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.380 19:41:15 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.380 19:41:15 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.380 19:41:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.380 19:41:15 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.380 19:41:15 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.380 19:41:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:52.380 19:41:15 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:52.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:52.380 19:41:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:52.380 19:41:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:52.380 19:41:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:52.380 19:41:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:52.380 19:41:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@474 -- # prepare_net_devs 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@436 -- # local -g is_hw=no 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@438 -- # remove_spdk_ns 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.380 19:41:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:52.380 19:41:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:33:52.380 19:41:15 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:52.380 19:41:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:57.655 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.655 
19:41:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:57.655 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:57.655 Found net devices under 0000:86:00.0: cvl_0_0 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ up == up ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:57.655 Found net devices under 0000:86:00.1: cvl_0_1 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@440 -- # is_hw=yes 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:57.655 19:41:21 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:57.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:57.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:33:57.915 00:33:57.915 --- 10.0.0.2 ping statistics --- 00:33:57.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.915 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:57.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:57.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:33:57.915 00:33:57.915 --- 10.0.0.1 ping statistics --- 00:33:57.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:57.915 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@448 -- # return 0 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:33:57.915 19:41:21 nvmf_dif -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:01.204 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:01.204 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:01.204 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:34:01.204 19:41:24 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:01.204 19:41:24 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@507 -- # nvmfpid=2351942 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@508 -- # waitforlisten 2351942 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2351942 ']' 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:01.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 [2024-10-17 19:41:24.618116] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:34:01.204 [2024-10-17 19:41:24.618159] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.204 [2024-10-17 19:41:24.697009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.204 [2024-10-17 19:41:24.737756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.204 [2024-10-17 19:41:24.737803] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.204 [2024-10-17 19:41:24.737811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.204 [2024-10-17 19:41:24.737817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.204 [2024-10-17 19:41:24.737823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.204 [2024-10-17 19:41:24.738382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 19:41:24 nvmf_dif -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.204 19:41:24 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:01.204 19:41:24 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 [2024-10-17 19:41:24.869450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.204 19:41:24 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 ************************************ 00:34:01.204 START TEST fio_dif_1_default 00:34:01.204 ************************************ 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 bdev_null0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:01.204 [2024-10-17 19:41:24.941752] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config=() 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # local subsystem config 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:01.204 { 00:34:01.204 "params": { 00:34:01.204 "name": "Nvme$subsystem", 00:34:01.204 "trtype": "$TEST_TRANSPORT", 00:34:01.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:01.204 "adrfam": "ipv4", 00:34:01.204 "trsvcid": "$NVMF_PORT", 00:34:01.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:01.204 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:34:01.204 "hdgst": ${hdgst:-false}, 00:34:01.204 "ddgst": ${ddgst:-false} 00:34:01.204 }, 00:34:01.204 "method": "bdev_nvme_attach_controller" 00:34:01.204 } 00:34:01.204 EOF 00:34:01.204 )") 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # cat 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # jq . 
00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@583 -- # IFS=, 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:01.204 "params": { 00:34:01.204 "name": "Nvme0", 00:34:01.204 "trtype": "tcp", 00:34:01.204 "traddr": "10.0.0.2", 00:34:01.204 "adrfam": "ipv4", 00:34:01.204 "trsvcid": "4420", 00:34:01.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:01.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:01.204 "hdgst": false, 00:34:01.204 "ddgst": false 00:34:01.204 }, 00:34:01.204 "method": "bdev_nvme_attach_controller" 00:34:01.204 }' 00:34:01.204 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:01.497 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:01.497 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:01.497 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:01.497 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:01.497 19:41:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:01.497 19:41:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:01.497 19:41:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:01.497 19:41:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:01.497 19:41:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:01.761 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:01.761 fio-3.35 00:34:01.761 Starting 1 thread 00:34:13.966 00:34:13.966 filename0: (groupid=0, jobs=1): err= 0: pid=2352203: Thu Oct 17 19:41:35 2024 00:34:13.966 read: IOPS=210, BW=841KiB/s (862kB/s)(8432KiB/10021msec) 00:34:13.966 slat (nsec): min=5823, max=26321, avg=6151.61, stdev=1179.77 00:34:13.966 clat (usec): min=366, max=42582, avg=18997.70, stdev=20286.20 00:34:13.966 lat (usec): min=372, max=42588, avg=19003.86, stdev=20286.15 00:34:13.966 clat percentiles (usec): 00:34:13.966 | 1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 392], 20.00th=[ 404], 00:34:13.966 | 30.00th=[ 412], 40.00th=[ 453], 50.00th=[ 586], 60.00th=[40633], 00:34:13.966 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:34:13.966 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:13.966 | 99.99th=[42730] 00:34:13.966 bw ( KiB/s): min= 736, max= 1024, per=99.95%, avg=841.60, stdev=84.41, samples=20 00:34:13.966 iops : min= 184, max= 256, avg=210.40, stdev=21.10, samples=20 00:34:13.966 lat (usec) : 500=44.73%, 750=9.54% 00:34:13.966 lat (msec) : 10=0.19%, 50=45.54% 00:34:13.966 cpu : usr=91.73%, sys=8.03%, ctx=10, majf=0, minf=0 00:34:13.966 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.966 issued rwts: total=2108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.966 latency : target=0, window=0, percentile=100.00%, depth=4 
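A quick consistency check on the summary that follows: the job above issued 2108 reads of 4096 B each, i.e. 2108 × 4096 B = 8,634,368 B ≈ 8432 KiB of I/O; over the 10,021 ms runtime that works out to 8432 KiB / 10.021 s ≈ 841 KiB/s, or about 862 kB/s in SI units (841 × 1024 / 1000), matching both figures fio reports on the READ line below.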
00:34:13.966 00:34:13.966 Run status group 0 (all jobs): 00:34:13.966 READ: bw=841KiB/s (862kB/s), 841KiB/s-841KiB/s (862kB/s-862kB/s), io=8432KiB (8634kB), run=10021-10021msec 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 00:34:13.966 real 0m11.202s 00:34:13.966 user 0m16.116s 00:34:13.966 sys 0m1.092s 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 ************************************ 00:34:13.966 END TEST fio_dif_1_default 00:34:13.966 ************************************ 00:34:13.966 19:41:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:13.966 19:41:36 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:13.966 19:41:36 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 ************************************ 00:34:13.966 START TEST fio_dif_1_multi_subsystems 00:34:13.966 ************************************ 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 bdev_null0 00:34:13.966 19:41:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 [2024-10-17 19:41:36.214915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 bdev_null1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.966 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config=() 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # local subsystem config 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:13.967 { 00:34:13.967 "params": { 00:34:13.967 "name": "Nvme$subsystem", 00:34:13.967 "trtype": "$TEST_TRANSPORT", 00:34:13.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.967 "adrfam": "ipv4", 00:34:13.967 "trsvcid": "$NVMF_PORT", 00:34:13.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.967 "hdgst": ${hdgst:-false}, 00:34:13.967 "ddgst": ${ddgst:-false} 00:34:13.967 }, 00:34:13.967 "method": "bdev_nvme_attach_controller" 00:34:13.967 } 00:34:13.967 EOF 00:34:13.967 )") 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file <= files )) 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:13.967 { 00:34:13.967 "params": { 00:34:13.967 "name": "Nvme$subsystem", 00:34:13.967 "trtype": "$TEST_TRANSPORT", 00:34:13.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.967 "adrfam": "ipv4", 00:34:13.967 "trsvcid": "$NVMF_PORT", 00:34:13.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.967 "hdgst": ${hdgst:-false}, 00:34:13.967 "ddgst": ${ddgst:-false} 00:34:13.967 }, 00:34:13.967 "method": "bdev_nvme_attach_controller" 00:34:13.967 } 00:34:13.967 EOF 00:34:13.967 )") 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # cat 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # jq . 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@583 -- # IFS=, 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:13.967 "params": { 00:34:13.967 "name": "Nvme0", 00:34:13.967 "trtype": "tcp", 00:34:13.967 "traddr": "10.0.0.2", 00:34:13.967 "adrfam": "ipv4", 00:34:13.967 "trsvcid": "4420", 00:34:13.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:13.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:13.967 "hdgst": false, 00:34:13.967 "ddgst": false 00:34:13.967 }, 00:34:13.967 "method": "bdev_nvme_attach_controller" 00:34:13.967 },{ 00:34:13.967 "params": { 00:34:13.967 "name": "Nvme1", 00:34:13.967 "trtype": "tcp", 00:34:13.967 "traddr": "10.0.0.2", 00:34:13.967 "adrfam": "ipv4", 00:34:13.967 "trsvcid": "4420", 00:34:13.967 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:13.967 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:13.967 "hdgst": false, 00:34:13.967 "ddgst": false 00:34:13.967 }, 00:34:13.967 "method": "bdev_nvme_attach_controller" 00:34:13.967 }' 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 
-- # asan_lib= 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:13.967 19:41:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:13.967 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:13.967 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:13.967 fio-3.35 00:34:13.967 Starting 2 threads 00:34:24.027 00:34:24.027 filename0: (groupid=0, jobs=1): err= 0: pid=2354164: Thu Oct 17 19:41:47 2024 00:34:24.027 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:34:24.027 slat (nsec): min=5934, max=26659, avg=7734.41, stdev=2697.03 00:34:24.027 clat (usec): min=40820, max=41995, avg=41001.89, stdev=160.71 00:34:24.027 lat (usec): min=40827, max=42012, avg=41009.62, stdev=161.06 00:34:24.027 clat percentiles (usec): 00:34:24.027 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:24.027 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:24.027 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:24.027 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:24.027 | 99.99th=[42206] 00:34:24.027 bw ( KiB/s): min= 384, max= 416, per=32.40%, avg=388.80, stdev=11.72, samples=20 00:34:24.027 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:24.027 lat (msec) : 50=100.00% 00:34:24.027 cpu : usr=97.11%, sys=2.63%, ctx=10, majf=0, minf=111 00:34:24.027 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.027 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.027 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:24.027 filename1: (groupid=0, jobs=1): err= 0: pid=2354165: Thu Oct 17 19:41:47 2024 00:34:24.027 read: IOPS=201, BW=808KiB/s (827kB/s)(8096KiB/10020msec) 00:34:24.027 slat (nsec): min=5966, max=24204, avg=7109.91, stdev=2025.35 00:34:24.027 clat (usec): min=380, max=42596, avg=19780.27, stdev=20437.57 00:34:24.027 lat (usec): min=386, max=42603, avg=19787.38, stdev=20436.99 00:34:24.027 clat percentiles (usec): 00:34:24.027 | 1.00th=[ 392], 5.00th=[ 404], 10.00th=[ 408], 20.00th=[ 416], 00:34:24.027 | 30.00th=[ 429], 40.00th=[ 457], 50.00th=[ 586], 60.00th=[40633], 00:34:24.027 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:24.027 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:24.027 | 99.99th=[42730] 00:34:24.027 bw ( KiB/s): min= 736, max= 896, per=67.47%, avg=808.00, stdev=45.11, samples=20 00:34:24.027 iops : min= 184, max= 224, avg=202.00, stdev=11.28, samples=20 00:34:24.027 lat (usec) : 500=42.64%, 750=9.93%, 1000=0.20% 00:34:24.027 lat (msec) : 50=47.23% 00:34:24.027 cpu : usr=96.58%, sys=3.17%, ctx=5, majf=0, minf=173 00:34:24.027 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.027 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.027 issued rwts: total=2024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.027 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:24.027 00:34:24.027 Run status group 0 (all jobs): 00:34:24.027 READ: bw=1198KiB/s (1226kB/s), 390KiB/s-808KiB/s (399kB/s-827kB/s), io=11.7MiB (12.3MB), run=10011-10020msec 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 00:34:24.027 real 0m11.469s 00:34:24.027 user 0m26.923s 00:34:24.027 sys 0m0.897s 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 ************************************ 00:34:24.027 END TEST fio_dif_1_multi_subsystems 00:34:24.027 ************************************ 00:34:24.027 19:41:47 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:34:24.027 19:41:47 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:24.027 19:41:47 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 ************************************ 00:34:24.027 START TEST fio_dif_rand_params 00:34:24.027 ************************************ 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 bdev_null0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:24.027 [2024-10-17 19:41:47.757953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.027 19:41:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:24.027 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:24.028 { 00:34:24.028 "params": { 00:34:24.028 "name": "Nvme$subsystem", 00:34:24.028 "trtype": "$TEST_TRANSPORT", 00:34:24.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:24.028 "adrfam": "ipv4", 00:34:24.028 "trsvcid": "$NVMF_PORT", 00:34:24.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:24.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:24.028 "hdgst": ${hdgst:-false}, 00:34:24.028 "ddgst": ${ddgst:-false} 00:34:24.028 }, 00:34:24.028 "method": "bdev_nvme_attach_controller" 00:34:24.028 } 00:34:24.028 EOF 00:34:24.028 )") 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 
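The trace above shows the pattern gen_nvmf_target_json uses to build the initiator-side configuration: one JSON fragment per subsystem is captured from a cat <<-EOF heredoc into a bash array, then the fragments are comma-joined (IFS=,) and pretty-printed through jq. A minimal stand-alone sketch of the same pattern for one subsystem follows; it is illustrative only, not the SPDK helper itself, and the enclosing "subsystems"/"bdev" envelope is the standard SPDK JSON-config shape assumed here rather than visible in this excerpt.

config=()
for sub in 0; do
  # capture one bdev_nvme_attach_controller fragment per subsystem
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# join the fragments with commas (IFS is changed only inside the subshell),
# wrap them in the bdev subsystem section, and validate/pretty-print with jq
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ $joined ] } ] }" | jq .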
00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:24.028 "params": { 00:34:24.028 "name": "Nvme0", 00:34:24.028 "trtype": "tcp", 00:34:24.028 "traddr": "10.0.0.2", 00:34:24.028 "adrfam": "ipv4", 00:34:24.028 "trsvcid": "4420", 00:34:24.028 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:24.028 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:24.028 "hdgst": false, 00:34:24.028 "ddgst": false 00:34:24.028 }, 00:34:24.028 "method": "bdev_nvme_attach_controller" 00:34:24.028 }' 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:24.028 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:24.303 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:24.303 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:24.303 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:24.303 19:41:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:24.566 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:24.566 ... 
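The fio invocation just traced hands fio two pipes: /dev/fd/61 carries the job description produced by gen_fio_conf (not captured in this log) and /dev/fd/62 the JSON printed above, while the SPDK bdev plugin is LD_PRELOADed. A self-contained equivalent run could look like the sketch below; the /tmp file names, the job-file contents, and the namespace bdev name Nvme0n1 are illustrative assumptions mirroring the traced parameters (rw=randread, bs=128k, iodepth=3, numjobs=3, runtime=5).

# sketch only: paths, job-file contents and the bdev name are assumptions
cat > /tmp/dif_rand.fio <<'EOF'
[global]
; the SPDK fio plugin requires fio's thread mode
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5
time_based=1

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/nvme_tcp.json /tmp/dif_rand.fio

Here /tmp/nvme_tcp.json would hold the bdev_nvme_attach_controller configuration shown above, wrapped in the SPDK { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] } envelope.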
00:34:24.566 fio-3.35 00:34:24.566 Starting 3 threads 00:34:31.128 00:34:31.128 filename0: (groupid=0, jobs=1): err= 0: pid=2356136: Thu Oct 17 19:41:53 2024 00:34:31.128 read: IOPS=318, BW=39.8MiB/s (41.8MB/s)(201MiB/5046msec) 00:34:31.128 slat (nsec): min=6284, max=32571, avg=11246.82, stdev=1964.37 00:34:31.128 clat (usec): min=3832, max=49111, avg=9374.23, stdev=4139.12 00:34:31.128 lat (usec): min=3840, max=49123, avg=9385.48, stdev=4139.21 00:34:31.128 clat percentiles (usec): 00:34:31.128 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7963], 00:34:31.128 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:34:31.128 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[10945], 95.00th=[11469], 00:34:31.128 | 99.00th=[45351], 99.50th=[46400], 99.90th=[49021], 99.95th=[49021], 00:34:31.128 | 99.99th=[49021] 00:34:31.128 bw ( KiB/s): min=35840, max=44032, per=33.56%, avg=41113.60, stdev=2245.42, samples=10 00:34:31.128 iops : min= 280, max= 344, avg=321.20, stdev=17.54, samples=10 00:34:31.128 lat (msec) : 4=0.06%, 10=74.94%, 20=23.94%, 50=1.06% 00:34:31.128 cpu : usr=94.39%, sys=5.33%, ctx=7, majf=0, minf=9 00:34:31.128 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.128 issued rwts: total=1608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:31.128 filename0: (groupid=0, jobs=1): err= 0: pid=2356137: Thu Oct 17 19:41:53 2024 00:34:31.128 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(187MiB/5005msec) 00:34:31.128 slat (nsec): min=6265, max=34501, avg=11260.24, stdev=2026.48 00:34:31.128 clat (usec): min=3788, max=90344, avg=10040.81, stdev=6238.67 00:34:31.128 lat (usec): min=3795, max=90355, avg=10052.07, stdev=6238.78 00:34:31.128 clat percentiles (usec): 00:34:31.128 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 7570], 20.00th=[ 8291], 00:34:31.128 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:34:31.128 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11731], 00:34:31.128 | 99.00th=[48497], 99.50th=[50070], 99.90th=[90702], 99.95th=[90702], 00:34:31.128 | 99.99th=[90702] 00:34:31.128 bw ( KiB/s): min=30720, max=41216, per=31.16%, avg=38169.60, stdev=3216.63, samples=10 00:34:31.128 iops : min= 240, max= 322, avg=298.20, stdev=25.13, samples=10 00:34:31.128 lat (msec) : 4=0.20%, 10=69.06%, 20=28.94%, 50=1.27%, 100=0.54% 00:34:31.128 cpu : usr=94.66%, sys=5.06%, ctx=9, majf=0, minf=11 00:34:31.128 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.128 issued rwts: total=1493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:31.128 filename0: (groupid=0, jobs=1): err= 0: pid=2356138: Thu Oct 17 19:41:53 2024 00:34:31.128 read: IOPS=345, BW=43.1MiB/s (45.2MB/s)(216MiB/5008msec) 00:34:31.128 slat (nsec): min=6270, max=19816, avg=11176.81, stdev=1844.54 00:34:31.128 clat (usec): min=3260, max=51818, avg=8678.11, stdev=3965.22 00:34:31.128 lat (usec): min=3267, max=51838, avg=8689.29, stdev=3965.27 00:34:31.128 clat percentiles (usec): 00:34:31.128 | 1.00th=[ 3752], 5.00th=[ 5932], 10.00th=[ 6521], 
20.00th=[ 7373], 00:34:31.128 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8717], 00:34:31.128 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10421], 00:34:31.128 | 99.00th=[12256], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:34:31.128 | 99.99th=[51643] 00:34:31.128 bw ( KiB/s): min=40704, max=45824, per=36.07%, avg=44185.60, stdev=1541.68, samples=10 00:34:31.128 iops : min= 318, max= 358, avg=345.20, stdev=12.04, samples=10 00:34:31.128 lat (msec) : 4=1.68%, 10=89.18%, 20=8.28%, 50=0.69%, 100=0.17% 00:34:31.128 cpu : usr=94.19%, sys=5.51%, ctx=8, majf=0, minf=0 00:34:31.128 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:31.128 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.128 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.128 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.128 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:31.128 00:34:31.128 Run status group 0 (all jobs): 00:34:31.128 READ: bw=120MiB/s (125MB/s), 37.3MiB/s-43.1MiB/s (39.1MB/s-45.2MB/s), io=604MiB (633MB), run=5005-5046msec 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 
-- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.128 bdev_null0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.128 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 [2024-10-17 19:41:53.879384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 bdev_null1 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 bdev_null2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:31.129 { 00:34:31.129 "params": { 00:34:31.129 "name": "Nvme$subsystem", 00:34:31.129 "trtype": "$TEST_TRANSPORT", 00:34:31.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.129 "adrfam": "ipv4", 
00:34:31.129 "trsvcid": "$NVMF_PORT", 00:34:31.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.129 "hdgst": ${hdgst:-false}, 00:34:31.129 "ddgst": ${ddgst:-false} 00:34:31.129 }, 00:34:31.129 "method": "bdev_nvme_attach_controller" 00:34:31.129 } 00:34:31.129 EOF 00:34:31.129 )") 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:31.129 { 00:34:31.129 "params": { 00:34:31.129 "name": "Nvme$subsystem", 00:34:31.129 "trtype": "$TEST_TRANSPORT", 00:34:31.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.129 "adrfam": "ipv4", 00:34:31.129 "trsvcid": "$NVMF_PORT", 00:34:31.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.129 "hdgst": ${hdgst:-false}, 00:34:31.129 "ddgst": ${ddgst:-false} 00:34:31.129 }, 00:34:31.129 "method": "bdev_nvme_attach_controller" 00:34:31.129 } 00:34:31.129 EOF 00:34:31.129 )") 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:31.129 { 00:34:31.129 "params": { 00:34:31.129 "name": "Nvme$subsystem", 00:34:31.129 "trtype": "$TEST_TRANSPORT", 00:34:31.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.129 "adrfam": "ipv4", 00:34:31.129 "trsvcid": "$NVMF_PORT", 00:34:31.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.129 "hdgst": ${hdgst:-false}, 00:34:31.129 "ddgst": ${ddgst:-false} 00:34:31.129 }, 00:34:31.129 "method": "bdev_nvme_attach_controller" 00:34:31.129 } 00:34:31.129 EOF 00:34:31.129 )") 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:31.129 19:41:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:31.129 "params": { 00:34:31.129 "name": "Nvme0", 00:34:31.129 "trtype": "tcp", 00:34:31.129 "traddr": "10.0.0.2", 00:34:31.129 "adrfam": "ipv4", 00:34:31.129 "trsvcid": "4420", 00:34:31.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.129 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.129 "hdgst": false, 00:34:31.129 "ddgst": false 00:34:31.129 }, 00:34:31.129 "method": "bdev_nvme_attach_controller" 00:34:31.129 },{ 00:34:31.129 "params": { 00:34:31.129 "name": "Nvme1", 00:34:31.129 "trtype": "tcp", 00:34:31.129 "traddr": "10.0.0.2", 00:34:31.129 "adrfam": "ipv4", 00:34:31.129 "trsvcid": "4420", 00:34:31.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:31.129 "hdgst": false, 00:34:31.129 "ddgst": false 00:34:31.129 }, 00:34:31.129 "method": "bdev_nvme_attach_controller" 00:34:31.129 },{ 00:34:31.129 "params": { 00:34:31.129 "name": "Nvme2", 00:34:31.129 "trtype": "tcp", 00:34:31.129 "traddr": "10.0.0.2", 00:34:31.129 "adrfam": "ipv4", 00:34:31.129 "trsvcid": "4420", 00:34:31.129 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:31.129 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:31.129 "hdgst": false, 00:34:31.129 "ddgst": false 00:34:31.129 }, 00:34:31.130 "method": "bdev_nvme_attach_controller" 00:34:31.130 }' 00:34:31.130 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:31.130 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:31.130 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.130 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.130 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:31.130 19:41:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:31.130 19:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:31.130 19:41:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:31.130 19:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.130 19:41:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.130 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:31.130 ... 00:34:31.130 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:31.130 ... 00:34:31.130 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:31.130 ... 00:34:31.130 fio-3.35 00:34:31.130 Starting 24 threads 00:34:43.315 00:34:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=2357317: Thu Oct 17 19:42:05 2024 00:34:43.315 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10021msec) 00:34:43.315 slat (nsec): min=9197, max=91406, avg=32790.70, stdev=13268.54 00:34:43.315 clat (usec): min=16967, max=38657, avg=29716.16, stdev=822.99 00:34:43.315 lat (usec): min=16984, max=38672, avg=29748.96, stdev=823.05 00:34:43.315 clat percentiles (usec): 00:34:43.315 | 1.00th=[28443], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:34:43.315 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:43.315 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.315 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31851], 99.95th=[31851], 00:34:43.315 | 99.99th=[38536] 00:34:43.315 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.315 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.315 lat (msec) : 20=0.26%, 50=99.74% 00:34:43.315 cpu : usr=98.74%, sys=0.91%, ctx=33, majf=0, minf=9 00:34:43.315 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=2357319: Thu Oct 17 19:42:05 2024 00:34:43.315 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10021msec) 00:34:43.315 slat (nsec): min=7368, max=99948, avg=34278.81, stdev=21733.48 00:34:43.315 clat (usec): min=17259, max=31961, avg=29724.23, stdev=827.52 00:34:43.315 lat (usec): min=17280, max=31976, avg=29758.50, stdev=826.22 00:34:43.315 clat percentiles (usec): 00:34:43.315 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.315 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:34:43.315 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.315 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31851], 99.95th=[31851], 00:34:43.315 | 99.99th=[31851] 00:34:43.315 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.315 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.315 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.315 cpu : usr=98.40%, sys=1.23%, ctx=13, majf=0, minf=9 00:34:43.315 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, 
>=64=0.0% 00:34:43.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=2357320: Thu Oct 17 19:42:05 2024 00:34:43.315 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10007msec) 00:34:43.315 slat (nsec): min=7454, max=82619, avg=29240.83, stdev=17690.27 00:34:43.315 clat (usec): min=8421, max=76710, avg=29845.66, stdev=2284.33 00:34:43.315 lat (usec): min=8429, max=76760, avg=29874.90, stdev=2284.50 00:34:43.315 clat percentiles (usec): 00:34:43.315 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:34:43.315 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.315 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.315 | 99.00th=[30540], 99.50th=[31589], 99.90th=[60556], 99.95th=[60556], 00:34:43.315 | 99.99th=[77071] 00:34:43.315 bw ( KiB/s): min= 1904, max= 2176, per=4.14%, avg=2121.26, stdev=72.21, samples=19 00:34:43.315 iops : min= 476, max= 544, avg=530.32, stdev=18.05, samples=19 00:34:43.315 lat (msec) : 10=0.26%, 20=0.30%, 50=99.14%, 100=0.30% 00:34:43.315 cpu : usr=98.60%, sys=1.00%, ctx=12, majf=0, minf=9 00:34:43.315 IO depths : 1=0.1%, 2=6.3%, 4=25.0%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:34:43.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 issued rwts: total=5326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=2357321: Thu Oct 17 19:42:05 2024 00:34:43.315 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10021msec) 00:34:43.315 slat (nsec): min=7507, max=98119, avg=30674.95, stdev=21945.76 00:34:43.315 clat (usec): min=16939, max=36706, avg=29744.67, stdev=866.36 00:34:43.315 lat (usec): min=16956, max=36734, avg=29775.35, stdev=863.66 00:34:43.315 clat percentiles (usec): 00:34:43.315 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:43.315 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.315 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:43.315 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:34:43.315 | 99.99th=[36963] 00:34:43.315 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.315 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.315 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.315 cpu : usr=98.61%, sys=1.01%, ctx=12, majf=0, minf=9 00:34:43.315 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=2357322: Thu Oct 17 19:42:05 2024 00:34:43.315 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec) 00:34:43.315 slat (nsec): min=7964, max=42258, avg=19641.84, stdev=6144.77 00:34:43.315 clat 
(usec): min=17197, max=31673, avg=29814.08, stdev=968.85 00:34:43.315 lat (usec): min=17206, max=31685, avg=29833.72, stdev=968.84 00:34:43.315 clat percentiles (usec): 00:34:43.315 | 1.00th=[28967], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:34:43.315 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:34:43.315 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.315 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:34:43.315 | 99.99th=[31589] 00:34:43.315 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.315 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.315 lat (msec) : 20=0.34%, 50=99.66% 00:34:43.315 cpu : usr=98.38%, sys=1.24%, ctx=12, majf=0, minf=9 00:34:43.315 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.315 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.315 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.315 filename0: (groupid=0, jobs=1): err= 0: pid=2357323: Thu Oct 17 19:42:05 2024 00:34:43.315 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:43.315 slat (nsec): min=4746, max=95324, avg=36820.37, stdev=21524.31 00:34:43.315 clat (usec): min=10809, max=62183, avg=29680.19, stdev=1760.40 00:34:43.315 lat (usec): min=10824, max=62196, avg=29717.01, stdev=1760.70 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30802], 99.50th=[31065], 99.90th=[52167], 99.95th=[52167], 00:34:43.316 | 99.99th=[62129] 00:34:43.316 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2124.80, stdev=76.58, samples=20 00:34:43.316 iops : min= 480, max= 544, avg=531.20, stdev=19.14, samples=20 00:34:43.316 lat (msec) : 20=0.34%, 50=99.36%, 100=0.30% 00:34:43.316 cpu : usr=98.59%, sys=1.03%, ctx=13, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename0: (groupid=0, jobs=1): err= 0: pid=2357324: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:43.316 slat (nsec): min=7629, max=44377, avg=20797.27, stdev=5856.35 00:34:43.316 clat (usec): min=15346, max=41179, avg=29865.35, stdev=1062.17 00:34:43.316 lat (usec): min=15360, max=41196, avg=29886.14, stdev=1061.95 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:34:43.316 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.316 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.316 | 99.00th=[30802], 99.50th=[31589], 99.90th=[41157], 99.95th=[41157], 00:34:43.316 | 99.99th=[41157] 00:34:43.316 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, 
avg=2124.80, stdev=64.34, samples=20 00:34:43.316 iops : min= 512, max= 544, avg=531.20, stdev=16.08, samples=20 00:34:43.316 lat (msec) : 20=0.34%, 50=99.66% 00:34:43.316 cpu : usr=98.41%, sys=1.22%, ctx=11, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename0: (groupid=0, jobs=1): err= 0: pid=2357325: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=530, BW=2123KiB/s (2174kB/s)(20.8MiB/10007msec) 00:34:43.316 slat (nsec): min=8833, max=95204, avg=38252.44, stdev=20957.74 00:34:43.316 clat (usec): min=22120, max=58601, avg=29761.07, stdev=1651.64 00:34:43.316 lat (usec): min=22137, max=58619, avg=29799.32, stdev=1650.77 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30802], 99.50th=[30802], 99.90th=[58459], 99.95th=[58459], 00:34:43.316 | 99.99th=[58459] 00:34:43.316 bw ( KiB/s): min= 1923, max= 2176, per=4.14%, avg=2118.55, stdev=77.01, samples=20 00:34:43.316 iops : min= 480, max= 544, avg=529.60, stdev=19.35, samples=20 00:34:43.316 lat (msec) : 50=99.70%, 100=0.30% 00:34:43.316 cpu : usr=98.73%, sys=0.89%, ctx=13, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357326: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=532, BW=2131KiB/s (2182kB/s)(20.8MiB/10003msec) 00:34:43.316 slat (nsec): min=7604, max=97198, avg=33911.53, stdev=21505.63 00:34:43.316 clat (usec): min=18881, max=40764, avg=29763.23, stdev=721.20 00:34:43.316 lat (usec): min=18892, max=40783, avg=29797.15, stdev=718.71 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.316 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.316 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.316 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31327], 99.95th=[31589], 00:34:43.316 | 99.99th=[40633] 00:34:43.316 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2128.84, stdev=63.44, samples=19 00:34:43.316 iops : min= 512, max= 544, avg=532.21, stdev=15.86, samples=19 00:34:43.316 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.316 cpu : usr=98.64%, sys=0.98%, ctx=12, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357327: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=532, BW=2128KiB/s (2179kB/s)(20.8MiB/10015msec) 00:34:43.316 slat (nsec): min=7506, max=98577, avg=38348.94, stdev=21995.84 00:34:43.316 clat (usec): min=12921, max=49905, avg=29678.21, stdev=1472.32 00:34:43.316 lat (usec): min=12930, max=49922, avg=29716.56, stdev=1473.17 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30802], 99.50th=[31589], 99.90th=[50070], 99.95th=[50070], 00:34:43.316 | 99.99th=[50070] 00:34:43.316 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2124.95, stdev=76.15, samples=20 00:34:43.316 iops : min= 480, max= 544, avg=531.20, stdev=19.14, samples=20 00:34:43.316 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.316 cpu : usr=98.36%, sys=1.27%, ctx=9, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357329: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec) 00:34:43.316 slat (nsec): min=7632, max=41402, avg=17113.00, stdev=6452.27 00:34:43.316 clat (usec): min=17420, max=31637, avg=29837.80, stdev=968.85 00:34:43.316 lat (usec): min=17436, max=31649, avg=29854.92, stdev=968.47 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[28443], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:34:43.316 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:34:43.316 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.316 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:34:43.316 | 99.99th=[31589] 00:34:43.316 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.316 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.316 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.316 cpu : usr=98.42%, sys=1.20%, ctx=10, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357330: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:34:43.316 slat (nsec): min=4796, max=95090, avg=37433.95, stdev=21306.54 00:34:43.316 clat (usec): min=10827, max=50943, avg=29674.74, stdev=1624.56 00:34:43.316 lat (usec): min=10842, max=50959, avg=29712.18, stdev=1625.00 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 
60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30540], 99.50th=[31065], 99.90th=[51119], 99.95th=[51119], 00:34:43.316 | 99.99th=[51119] 00:34:43.316 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2124.95, stdev=76.15, samples=20 00:34:43.316 iops : min= 480, max= 544, avg=531.20, stdev=19.14, samples=20 00:34:43.316 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:43.316 cpu : usr=98.56%, sys=1.06%, ctx=13, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357331: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10021msec) 00:34:43.316 slat (nsec): min=5634, max=99364, avg=40151.71, stdev=21745.06 00:34:43.316 clat (usec): min=16954, max=31903, avg=29634.02, stdev=829.67 00:34:43.316 lat (usec): min=16970, max=31928, avg=29674.17, stdev=830.90 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[28181], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31589], 99.95th=[31851], 00:34:43.316 | 99.99th=[31851] 00:34:43.316 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.316 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.316 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.316 cpu : usr=98.67%, sys=0.95%, ctx=12, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357332: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=548, BW=2193KiB/s (2245kB/s)(21.4MiB/10015msec) 00:34:43.316 slat (nsec): min=7447, max=98552, avg=24970.16, stdev=21976.06 00:34:43.316 clat (usec): min=3365, max=36930, avg=29001.15, stdev=3693.24 00:34:43.316 lat (usec): min=3380, max=36961, avg=29026.12, stdev=3695.43 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[ 5866], 5.00th=[21365], 10.00th=[29230], 20.00th=[29492], 00:34:43.316 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.316 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:43.316 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33817], 99.95th=[33817], 00:34:43.316 | 99.99th=[36963] 00:34:43.316 bw ( KiB/s): min= 2048, max= 3350, per=4.28%, avg=2189.90, stdev=279.95, samples=20 00:34:43.316 iops : min= 512, max= 837, avg=547.45, stdev=69.88, samples=20 00:34:43.316 lat (msec) : 4=0.42%, 10=1.04%, 20=2.19%, 50=96.36% 00:34:43.316 cpu : usr=98.34%, sys=1.27%, ctx=14, majf=0, minf=9 00:34:43.316 IO depths : 1=5.8%, 2=11.7%, 4=23.7%, 8=52.1%, 16=6.7%, 32=0.0%, 
>=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357333: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=531, BW=2125KiB/s (2176kB/s)(20.8MiB/10001msec) 00:34:43.316 slat (nsec): min=6323, max=99180, avg=39511.63, stdev=22100.61 00:34:43.316 clat (usec): min=25279, max=50707, avg=29719.78, stdev=1208.30 00:34:43.316 lat (usec): min=25289, max=50724, avg=29759.29, stdev=1207.97 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30802], 99.50th=[31327], 99.90th=[50594], 99.95th=[50594], 00:34:43.316 | 99.99th=[50594] 00:34:43.316 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2122.11, stdev=77.69, samples=19 00:34:43.316 iops : min= 480, max= 544, avg=530.53, stdev=19.42, samples=19 00:34:43.316 lat (msec) : 50=99.70%, 100=0.30% 00:34:43.316 cpu : usr=98.52%, sys=1.11%, ctx=9, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename1: (groupid=0, jobs=1): err= 0: pid=2357334: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=531, BW=2128KiB/s (2179kB/s)(20.8MiB/10016msec) 00:34:43.316 slat (nsec): min=7929, max=99670, avg=39119.93, stdev=21846.90 00:34:43.316 clat (usec): min=21927, max=43891, avg=29684.04, stdev=674.96 00:34:43.316 lat (usec): min=21949, max=43916, avg=29723.16, stdev=676.06 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30802], 99.50th=[31327], 99.90th=[36963], 99.95th=[43779], 00:34:43.316 | 99.99th=[43779] 00:34:43.316 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2122.11, stdev=64.93, samples=19 00:34:43.316 iops : min= 512, max= 544, avg=530.53, stdev=16.23, samples=19 00:34:43.316 lat (msec) : 50=100.00% 00:34:43.316 cpu : usr=98.48%, sys=1.15%, ctx=11, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename2: (groupid=0, jobs=1): err= 0: pid=2357335: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10013msec) 00:34:43.316 slat (nsec): min=7853, max=41855, avg=20821.24, stdev=5826.73 00:34:43.316 clat (usec): min=17490, max=31703, 
avg=29797.21, stdev=963.65 00:34:43.316 lat (usec): min=17499, max=31717, avg=29818.03, stdev=964.01 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[28443], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:34:43.316 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.316 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.316 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:34:43.316 | 99.99th=[31589] 00:34:43.316 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.316 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.316 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.316 cpu : usr=98.52%, sys=1.09%, ctx=12, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename2: (groupid=0, jobs=1): err= 0: pid=2357336: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:43.316 slat (nsec): min=5765, max=95186, avg=35930.60, stdev=21751.24 00:34:43.316 clat (usec): min=10816, max=51778, avg=29682.54, stdev=1656.22 00:34:43.316 lat (usec): min=10837, max=51793, avg=29718.47, stdev=1656.66 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30540], 99.50th=[31065], 99.90th=[51643], 99.95th=[51643], 00:34:43.316 | 99.99th=[51643] 00:34:43.316 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2124.80, stdev=76.58, samples=20 00:34:43.316 iops : min= 480, max= 544, avg=531.20, stdev=19.14, samples=20 00:34:43.316 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:34:43.316 cpu : usr=98.55%, sys=1.07%, ctx=9, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename2: (groupid=0, jobs=1): err= 0: pid=2357337: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:34:43.316 slat (nsec): min=5887, max=95144, avg=37749.86, stdev=21207.86 00:34:43.316 clat (usec): min=10789, max=50908, avg=29678.51, stdev=1649.04 00:34:43.316 lat (usec): min=10818, max=50921, avg=29716.26, stdev=1649.28 00:34:43.316 clat percentiles (usec): 00:34:43.316 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.316 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.316 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.316 | 99.00th=[30802], 99.50th=[31065], 99.90th=[51119], 99.95th=[51119], 00:34:43.316 | 99.99th=[51119] 00:34:43.316 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2124.95, stdev=76.15, 
samples=20 00:34:43.316 iops : min= 480, max= 544, avg=531.20, stdev=19.14, samples=20 00:34:43.316 lat (msec) : 20=0.34%, 50=99.36%, 100=0.30% 00:34:43.316 cpu : usr=98.48%, sys=1.14%, ctx=12, majf=0, minf=9 00:34:43.316 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.316 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.316 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.316 filename2: (groupid=0, jobs=1): err= 0: pid=2357338: Thu Oct 17 19:42:05 2024 00:34:43.316 read: IOPS=539, BW=2157KiB/s (2209kB/s)(21.1MiB/10028msec) 00:34:43.316 slat (nsec): min=7492, max=90797, avg=20750.91, stdev=17790.55 00:34:43.316 clat (usec): min=3723, max=31507, avg=29502.72, stdev=2869.96 00:34:43.317 lat (usec): min=3756, max=31525, avg=29523.47, stdev=2870.03 00:34:43.317 clat percentiles (usec): 00:34:43.317 | 1.00th=[ 9765], 5.00th=[29230], 10.00th=[29492], 20.00th=[29754], 00:34:43.317 | 30.00th=[29754], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:34:43.317 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:43.317 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31589], 99.95th=[31589], 00:34:43.317 | 99.99th=[31589] 00:34:43.317 bw ( KiB/s): min= 2048, max= 2688, per=4.21%, avg=2156.80, stdev=139.45, samples=20 00:34:43.317 iops : min= 512, max= 672, avg=539.20, stdev=34.86, samples=20 00:34:43.317 lat (msec) : 4=0.43%, 10=0.59%, 20=1.02%, 50=97.97% 00:34:43.317 cpu : usr=98.36%, sys=1.26%, ctx=37, majf=0, minf=9 00:34:43.317 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:43.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.317 filename2: (groupid=0, jobs=1): err= 0: pid=2357339: Thu Oct 17 19:42:05 2024 00:34:43.317 read: IOPS=539, BW=2157KiB/s (2209kB/s)(21.1MiB/10024msec) 00:34:43.317 slat (nsec): min=7021, max=87665, avg=21102.18, stdev=15276.96 00:34:43.317 clat (usec): min=3720, max=40759, avg=29521.94, stdev=3035.31 00:34:43.317 lat (usec): min=3732, max=40785, avg=29543.05, stdev=3035.63 00:34:43.317 clat percentiles (usec): 00:34:43.317 | 1.00th=[ 7046], 5.00th=[29492], 10.00th=[29754], 20.00th=[29754], 00:34:43.317 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30016], 60.00th=[30016], 00:34:43.317 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30278], 95.00th=[30278], 00:34:43.317 | 99.00th=[30802], 99.50th=[30802], 99.90th=[40109], 99.95th=[40633], 00:34:43.317 | 99.99th=[40633] 00:34:43.317 bw ( KiB/s): min= 2048, max= 2672, per=4.21%, avg=2156.00, stdev=132.03, samples=20 00:34:43.317 iops : min= 512, max= 668, avg=539.00, stdev=33.01, samples=20 00:34:43.317 lat (msec) : 4=0.30%, 10=1.18%, 20=0.59%, 50=97.93% 00:34:43.317 cpu : usr=98.38%, sys=1.20%, ctx=25, majf=0, minf=11 00:34:43.317 IO depths : 1=0.1%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:43.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 issued rwts: total=5406,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.317 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:34:43.317 filename2: (groupid=0, jobs=1): err= 0: pid=2357340: Thu Oct 17 19:42:05 2024 00:34:43.317 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:43.317 slat (nsec): min=7715, max=43220, avg=21249.03, stdev=6093.54 00:34:43.317 clat (usec): min=15328, max=41160, avg=29861.95, stdev=1063.04 00:34:43.317 lat (usec): min=15350, max=41178, avg=29883.20, stdev=1062.90 00:34:43.317 clat percentiles (usec): 00:34:43.317 | 1.00th=[29492], 5.00th=[29754], 10.00th=[29754], 20.00th=[29754], 00:34:43.317 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:34:43.317 | 70.00th=[30016], 80.00th=[30016], 90.00th=[30016], 95.00th=[30278], 00:34:43.317 | 99.00th=[30802], 99.50th=[31589], 99.90th=[41157], 99.95th=[41157], 00:34:43.317 | 99.99th=[41157] 00:34:43.317 bw ( KiB/s): min= 2048, max= 2176, per=4.15%, avg=2124.80, stdev=64.34, samples=20 00:34:43.317 iops : min= 512, max= 544, avg=531.20, stdev=16.08, samples=20 00:34:43.317 lat (msec) : 20=0.34%, 50=99.66% 00:34:43.317 cpu : usr=98.44%, sys=1.19%, ctx=13, majf=0, minf=9 00:34:43.317 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.317 filename2: (groupid=0, jobs=1): err= 0: pid=2357342: Thu Oct 17 19:42:05 2024 00:34:43.317 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10021msec) 00:34:43.317 slat (usec): min=8, max=103, avg=40.18, stdev=21.45 00:34:43.317 clat (usec): min=16984, max=31899, avg=29627.74, stdev=824.59 00:34:43.317 lat (usec): min=17001, max=31955, avg=29667.92, stdev=826.05 00:34:43.317 clat percentiles (usec): 00:34:43.317 | 1.00th=[28443], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:34:43.317 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:34:43.317 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.317 | 99.00th=[30802], 99.50th=[30802], 99.90th=[31589], 99.95th=[31851], 00:34:43.317 | 99.99th=[31851] 00:34:43.317 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2131.20, stdev=62.64, samples=20 00:34:43.317 iops : min= 512, max= 544, avg=532.80, stdev=15.66, samples=20 00:34:43.317 lat (msec) : 20=0.30%, 50=99.70% 00:34:43.317 cpu : usr=98.59%, sys=1.02%, ctx=14, majf=0, minf=9 00:34:43.317 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:43.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.317 filename2: (groupid=0, jobs=1): err= 0: pid=2357343: Thu Oct 17 19:42:05 2024 00:34:43.317 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:34:43.317 slat (nsec): min=8191, max=81458, avg=39531.34, stdev=16867.11 00:34:43.317 clat (usec): min=8586, max=52260, avg=29706.30, stdev=1676.25 00:34:43.317 lat (usec): min=8598, max=52295, avg=29745.84, stdev=1676.07 00:34:43.317 clat percentiles (usec): 00:34:43.317 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:34:43.317 | 30.00th=[29492], 40.00th=[29754], 
50.00th=[29754], 60.00th=[29754], 00:34:43.317 | 70.00th=[29754], 80.00th=[30016], 90.00th=[30016], 95.00th=[30016], 00:34:43.317 | 99.00th=[30540], 99.50th=[31065], 99.90th=[52167], 99.95th=[52167], 00:34:43.317 | 99.99th=[52167] 00:34:43.317 bw ( KiB/s): min= 1920, max= 2176, per=4.15%, avg=2124.80, stdev=76.58, samples=20 00:34:43.317 iops : min= 480, max= 544, avg=531.20, stdev=19.14, samples=20 00:34:43.317 lat (msec) : 10=0.04%, 20=0.26%, 50=99.40%, 100=0.30% 00:34:43.317 cpu : usr=98.37%, sys=1.14%, ctx=67, majf=0, minf=9 00:34:43.317 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:43.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:43.317 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:43.317 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:43.317 00:34:43.317 Run status group 0 (all jobs): 00:34:43.317 READ: bw=50.0MiB/s (52.4MB/s), 2123KiB/s-2193KiB/s (2174kB/s-2245kB/s), io=501MiB (525MB), run=10001-10028msec 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 bdev_null0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 [2024-10-17 19:42:05.542704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 bdev_null1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config=() 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # local subsystem config 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.317 { 00:34:43.317 "params": { 00:34:43.317 "name": "Nvme$subsystem", 00:34:43.317 "trtype": "$TEST_TRANSPORT", 00:34:43.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.317 "adrfam": "ipv4", 00:34:43.317 "trsvcid": "$NVMF_PORT", 00:34:43.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.317 "hdgst": ${hdgst:-false}, 00:34:43.317 "ddgst": ${ddgst:-false} 00:34:43.317 }, 00:34:43.317 "method": "bdev_nvme_attach_controller" 00:34:43.317 } 00:34:43.317 EOF 00:34:43.317 )") 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:43.317 { 00:34:43.317 "params": { 00:34:43.317 "name": "Nvme$subsystem", 00:34:43.317 "trtype": "$TEST_TRANSPORT", 00:34:43.317 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:43.317 "adrfam": "ipv4", 00:34:43.317 "trsvcid": "$NVMF_PORT", 00:34:43.317 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:43.317 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:43.317 "hdgst": ${hdgst:-false}, 00:34:43.317 "ddgst": ${ddgst:-false} 00:34:43.317 }, 00:34:43.317 "method": "bdev_nvme_attach_controller" 00:34:43.317 } 00:34:43.317 EOF 00:34:43.317 )") 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # cat 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # jq . 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@583 -- # IFS=, 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:43.317 "params": { 00:34:43.317 "name": "Nvme0", 00:34:43.317 "trtype": "tcp", 00:34:43.317 "traddr": "10.0.0.2", 00:34:43.317 "adrfam": "ipv4", 00:34:43.317 "trsvcid": "4420", 00:34:43.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:43.317 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:43.317 "hdgst": false, 00:34:43.317 "ddgst": false 00:34:43.317 }, 00:34:43.317 "method": "bdev_nvme_attach_controller" 00:34:43.317 },{ 00:34:43.317 "params": { 00:34:43.317 "name": "Nvme1", 00:34:43.317 "trtype": "tcp", 00:34:43.317 "traddr": "10.0.0.2", 00:34:43.317 "adrfam": "ipv4", 00:34:43.317 "trsvcid": "4420", 00:34:43.317 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:43.317 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:43.317 "hdgst": false, 00:34:43.317 "ddgst": false 00:34:43.317 }, 00:34:43.317 "method": "bdev_nvme_attach_controller" 00:34:43.317 }' 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:43.317 19:42:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.317 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:43.317 ... 00:34:43.317 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:43.317 ... 
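For reference, the two /dev/fd streams wired into fio above are generated on the fly: /dev/fd/61 carries the job file from gen_fio_conf and /dev/fd/62 the SPDK bdev configuration from gen_nvmf_target_json. A minimal sketch of what the job file plausibly contains, reconstructed from the parameters traced above (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5) and the banner it produced; the thread=1 flag and the Nvme0n1/Nvme1n1 filenames are assumptions (the SPDK fio plugin runs threaded, and bdev_nvme_attach_controller with name NvmeX exposes namespace NvmeXn1), not a capture of the real file:

  # Sketch only -- approximates the fd that gen_fio_conf hands to fio.
  # Two job sections x numjobs=2 accounts for the "Starting 4 threads" below.
  cat <<'EOF' > /tmp/dif_rand_params.fio
  [global]
  ioengine=spdk_bdev    # passed on the CLI in the trace; shown here for completeness
  thread=1              # assumption: the spdk_bdev engine is used threaded
  direct=1
  rw=randread
  bs=8k,16k,128k        # read,write,trim sizes, matching the banner above
  numjobs=2
  iodepth=8
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1      # assumed namespace name for controller Nvme0

  [filename1]
  filename=Nvme1n1      # assumed namespace name for controller Nvme1
  EOF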
00:34:43.317 fio-3.35 00:34:43.317 Starting 4 threads 00:34:48.598 00:34:48.598 filename0: (groupid=0, jobs=1): err= 0: pid=2359398: Thu Oct 17 19:42:11 2024 00:34:48.598 read: IOPS=2937, BW=23.0MiB/s (24.1MB/s)(115MiB/5002msec) 00:34:48.598 slat (nsec): min=6127, max=26799, avg=8938.73, stdev=2925.15 00:34:48.598 clat (usec): min=578, max=5434, avg=2695.44, stdev=431.82 00:34:48.598 lat (usec): min=593, max=5446, avg=2704.38, stdev=431.68 00:34:48.598 clat percentiles (usec): 00:34:48.598 | 1.00th=[ 1631], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2409], 00:34:48.598 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:34:48.598 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3130], 95.00th=[ 3392], 00:34:48.598 | 99.00th=[ 4146], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5080], 00:34:48.598 | 99.99th=[ 5407] 00:34:48.598 bw ( KiB/s): min=22752, max=24320, per=27.48%, avg=23617.78, stdev=475.00, samples=9 00:34:48.598 iops : min= 2844, max= 3040, avg=2952.22, stdev=59.38, samples=9 00:34:48.598 lat (usec) : 750=0.02%, 1000=0.29% 00:34:48.598 lat (msec) : 2=2.96%, 4=95.47%, 10=1.26% 00:34:48.598 cpu : usr=95.52%, sys=4.16%, ctx=10, majf=0, minf=9 00:34:48.598 IO depths : 1=0.4%, 2=8.8%, 4=62.1%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 issued rwts: total=14695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.598 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.598 filename0: (groupid=0, jobs=1): err= 0: pid=2359399: Thu Oct 17 19:42:11 2024 00:34:48.598 read: IOPS=2559, BW=20.0MiB/s (21.0MB/s)(100MiB/5001msec) 00:34:48.598 slat (nsec): min=6152, max=50598, avg=8964.90, stdev=3124.18 00:34:48.598 clat (usec): min=795, max=5520, avg=3099.84, stdev=479.60 00:34:48.598 lat (usec): min=806, max=5533, avg=3108.80, stdev=479.34 00:34:48.598 clat percentiles (usec): 00:34:48.598 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2638], 20.00th=[ 2802], 00:34:48.598 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2999], 60.00th=[ 3064], 00:34:48.598 | 70.00th=[ 3228], 80.00th=[ 3425], 90.00th=[ 3687], 95.00th=[ 4047], 00:34:48.598 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5407], 00:34:48.598 | 99.99th=[ 5473] 00:34:48.598 bw ( KiB/s): min=19440, max=21168, per=23.80%, avg=20451.56, stdev=614.64, samples=9 00:34:48.598 iops : min= 2430, max= 2646, avg=2556.44, stdev=76.83, samples=9 00:34:48.598 lat (usec) : 1000=0.04% 00:34:48.598 lat (msec) : 2=0.60%, 4=93.94%, 10=5.42% 00:34:48.598 cpu : usr=95.90%, sys=3.74%, ctx=12, majf=0, minf=9 00:34:48.598 IO depths : 1=0.1%, 2=2.7%, 4=68.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 issued rwts: total=12801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.598 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.598 filename1: (groupid=0, jobs=1): err= 0: pid=2359400: Thu Oct 17 19:42:11 2024 00:34:48.598 read: IOPS=2598, BW=20.3MiB/s (21.3MB/s)(102MiB/5002msec) 00:34:48.598 slat (nsec): min=6168, max=47005, avg=8944.76, stdev=3094.45 00:34:48.598 clat (usec): min=941, max=5479, avg=3052.09, stdev=525.26 00:34:48.598 lat (usec): min=952, max=5486, avg=3061.03, stdev=524.91 00:34:48.598 clat percentiles (usec): 00:34:48.598 | 1.00th=[ 2040], 5.00th=[ 
2376], 10.00th=[ 2474], 20.00th=[ 2671], 00:34:48.598 | 30.00th=[ 2835], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:34:48.598 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 4080], 00:34:48.598 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5276], 99.95th=[ 5407], 00:34:48.598 | 99.99th=[ 5473] 00:34:48.598 bw ( KiB/s): min=19440, max=21664, per=24.09%, avg=20703.33, stdev=764.98, samples=9 00:34:48.598 iops : min= 2430, max= 2708, avg=2587.89, stdev=95.61, samples=9 00:34:48.598 lat (usec) : 1000=0.01% 00:34:48.598 lat (msec) : 2=0.82%, 4=93.72%, 10=5.45% 00:34:48.598 cpu : usr=96.18%, sys=3.46%, ctx=7, majf=0, minf=9 00:34:48.598 IO depths : 1=0.1%, 2=4.9%, 4=66.9%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 issued rwts: total=12996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.598 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.598 filename1: (groupid=0, jobs=1): err= 0: pid=2359401: Thu Oct 17 19:42:11 2024 00:34:48.598 read: IOPS=2647, BW=20.7MiB/s (21.7MB/s)(103MiB/5001msec) 00:34:48.598 slat (nsec): min=6182, max=52936, avg=9009.30, stdev=3071.62 00:34:48.598 clat (usec): min=1044, max=5403, avg=2996.07, stdev=486.81 00:34:48.598 lat (usec): min=1051, max=5409, avg=3005.08, stdev=486.61 00:34:48.598 clat percentiles (usec): 00:34:48.598 | 1.00th=[ 1926], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2638], 00:34:48.598 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2999], 00:34:48.598 | 70.00th=[ 3097], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 3916], 00:34:48.598 | 99.00th=[ 4490], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5145], 00:34:48.598 | 99.99th=[ 5407] 00:34:48.598 bw ( KiB/s): min=20080, max=22288, per=24.68%, avg=21208.89, stdev=701.27, samples=9 00:34:48.598 iops : min= 2510, max= 2786, avg=2651.11, stdev=87.66, samples=9 00:34:48.598 lat (msec) : 2=1.24%, 4=94.46%, 10=4.31% 00:34:48.598 cpu : usr=95.52%, sys=4.16%, ctx=10, majf=0, minf=9 00:34:48.598 IO depths : 1=0.2%, 2=4.0%, 4=67.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.598 issued rwts: total=13238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.598 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:48.598 00:34:48.598 Run status group 0 (all jobs): 00:34:48.598 READ: bw=83.9MiB/s (88.0MB/s), 20.0MiB/s-23.0MiB/s (21.0MB/s-24.1MB/s), io=420MiB (440MB), run=5001-5002msec 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.598 19:42:11 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.598 00:34:48.598 real 0m24.194s 00:34:48.598 user 4m51.929s 00:34:48.598 sys 0m5.247s 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:48.598 19:42:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:48.598 ************************************ 00:34:48.598 END TEST fio_dif_rand_params 00:34:48.598 ************************************ 00:34:48.598 19:42:11 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:48.598 19:42:11 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:34:48.598 19:42:11 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:48.598 19:42:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:48.598 ************************************ 00:34:48.598 START TEST fio_dif_digest 00:34:48.598 ************************************ 00:34:48.598 19:42:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:34:48.598 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:48.598 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:48.598 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:48.598 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:48.599 19:42:11 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.599 19:42:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.599 bdev_null0 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:48.599 [2024-10-17 19:42:12.029151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config=() 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # local subsystem config 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:34:48.599 { 00:34:48.599 "params": { 00:34:48.599 "name": "Nvme$subsystem", 00:34:48.599 "trtype": "$TEST_TRANSPORT", 00:34:48.599 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:48.599 "adrfam": "ipv4", 00:34:48.599 "trsvcid": "$NVMF_PORT", 00:34:48.599 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:48.599 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:48.599 "hdgst": ${hdgst:-false}, 00:34:48.599 "ddgst": ${ddgst:-false} 00:34:48.599 }, 00:34:48.599 "method": "bdev_nvme_attach_controller" 00:34:48.599 } 00:34:48.599 EOF 00:34:48.599 )") 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # cat 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # jq . 
00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@583 -- # IFS=, 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:34:48.599 "params": { 00:34:48.599 "name": "Nvme0", 00:34:48.599 "trtype": "tcp", 00:34:48.599 "traddr": "10.0.0.2", 00:34:48.599 "adrfam": "ipv4", 00:34:48.599 "trsvcid": "4420", 00:34:48.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:48.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:48.599 "hdgst": true, 00:34:48.599 "ddgst": true 00:34:48.599 }, 00:34:48.599 "method": "bdev_nvme_attach_controller" 00:34:48.599 }' 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:48.599 19:42:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:48.857 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:48.857 ... 
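The printf output above is only the list of per-controller config entries; the complete document fio receives on /dev/fd/62 wraps those entries in SPDK's JSON-config layout. A sketch of the assembled file for this digest run (the subsystems/bdev wrapper is an assumption about the helper's final output, not a verbatim capture; the params are copied from the trace, with hdgst/ddgst now true so header and data digests are exercised):

  # Sketch of the --spdk_json_conf document for the fio_dif_digest run.
  cat <<'EOF' > /tmp/spdk_bdev_conf.json
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": true,
              "ddgst": true
            }
          }
        ]
      }
    ]
  }
  EOF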
00:34:48.857 fio-3.35 00:34:48.857 Starting 3 threads 00:35:01.087 00:35:01.087 filename0: (groupid=0, jobs=1): err= 0: pid=2360961: Thu Oct 17 19:42:22 2024 00:35:01.087 read: IOPS=300, BW=37.6MiB/s (39.4MB/s)(378MiB/10048msec) 00:35:01.087 slat (nsec): min=6406, max=42840, avg=11601.55, stdev=1911.29 00:35:01.087 clat (usec): min=6100, max=50404, avg=9950.32, stdev=1231.46 00:35:01.087 lat (usec): min=6112, max=50416, avg=9961.92, stdev=1231.43 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[ 8094], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9372], 00:35:01.087 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:35:01.087 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[10945], 00:35:01.087 | 99.00th=[11600], 99.50th=[11863], 99.90th=[12256], 99.95th=[47449], 00:35:01.087 | 99.99th=[50594] 00:35:01.087 bw ( KiB/s): min=37707, max=39759, per=35.59%, avg=38645.70, stdev=548.63, samples=20 00:35:01.087 iops : min= 294, max= 310, avg=301.80, stdev= 4.30, samples=20 00:35:01.087 lat (msec) : 10=52.37%, 20=47.57%, 50=0.03%, 100=0.03% 00:35:01.087 cpu : usr=94.52%, sys=5.19%, ctx=13, majf=0, minf=72 00:35:01.087 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=3021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:01.087 filename0: (groupid=0, jobs=1): err= 0: pid=2360962: Thu Oct 17 19:42:22 2024 00:35:01.087 read: IOPS=278, BW=34.8MiB/s (36.4MB/s)(349MiB/10045msec) 00:35:01.087 slat (nsec): min=6523, max=37287, avg=11902.52, stdev=1770.63 00:35:01.087 clat (usec): min=6778, max=47521, avg=10761.13, stdev=1221.32 00:35:01.087 lat (usec): min=6794, max=47533, avg=10773.03, stdev=1221.32 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10159], 00:35:01.087 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:35:01.087 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:35:01.087 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13698], 99.95th=[45876], 00:35:01.087 | 99.99th=[47449] 00:35:01.087 bw ( KiB/s): min=34560, max=36608, per=32.90%, avg=35724.80, stdev=501.62, samples=20 00:35:01.087 iops : min= 270, max= 286, avg=279.10, stdev= 3.92, samples=20 00:35:01.087 lat (msec) : 10=15.11%, 20=84.82%, 50=0.07% 00:35:01.087 cpu : usr=94.64%, sys=5.06%, ctx=13, majf=0, minf=65 00:35:01.087 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.087 issued rwts: total=2793,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.087 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:01.087 filename0: (groupid=0, jobs=1): err= 0: pid=2360963: Thu Oct 17 19:42:22 2024 00:35:01.087 read: IOPS=269, BW=33.7MiB/s (35.4MB/s)(339MiB/10046msec) 00:35:01.087 slat (nsec): min=6411, max=27381, avg=11575.42, stdev=1744.00 00:35:01.087 clat (usec): min=8636, max=50918, avg=11087.59, stdev=1797.30 00:35:01.087 lat (usec): min=8644, max=50945, avg=11099.17, stdev=1797.54 00:35:01.087 clat percentiles (usec): 00:35:01.087 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 
00:35:01.087 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:35:01.087 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:35:01.087 | 99.00th=[12911], 99.50th=[13173], 99.90th=[51119], 99.95th=[51119], 00:35:01.087 | 99.99th=[51119] 00:35:01.088 bw ( KiB/s): min=32000, max=35328, per=31.93%, avg=34675.20, stdev=711.95, samples=20 00:35:01.088 iops : min= 250, max= 276, avg=270.90, stdev= 5.56, samples=20 00:35:01.088 lat (msec) : 10=7.12%, 20=92.70%, 50=0.07%, 100=0.11% 00:35:01.088 cpu : usr=94.97%, sys=4.72%, ctx=16, majf=0, minf=114 00:35:01.088 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.088 issued rwts: total=2711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.088 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:01.088 00:35:01.088 Run status group 0 (all jobs): 00:35:01.088 READ: bw=106MiB/s (111MB/s), 33.7MiB/s-37.6MiB/s (35.4MB/s-39.4MB/s), io=1066MiB (1117MB), run=10045-10048msec 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:01.088 00:35:01.088 real 0m11.169s 00:35:01.088 user 0m35.180s 00:35:01.088 sys 0m1.804s 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:01.088 19:42:23 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.088 ************************************ 00:35:01.088 END TEST fio_dif_digest 00:35:01.088 ************************************ 00:35:01.088 19:42:23 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:01.088 19:42:23 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:01.088 rmmod nvme_tcp 00:35:01.088 rmmod nvme_fabrics 00:35:01.088 rmmod nvme_keyring 00:35:01.088 19:42:23 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@515 -- # '[' -n 2351942 ']' 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@516 -- # killprocess 2351942 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2351942 ']' 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2351942 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2351942 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2351942' 00:35:01.088 killing process with pid 2351942 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2351942 00:35:01.088 19:42:23 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2351942 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:35:01.088 19:42:23 nvmf_dif -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:02.466 Waiting for block devices as requested 00:35:02.466 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:02.725 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:02.725 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:02.725 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:02.984 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:02.984 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:02.984 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:02.984 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:03.244 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:03.244 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:03.244 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:03.503 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:03.503 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:03.503 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:03.762 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:03.762 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:03.762 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@789 -- # iptables-save 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@789 -- # iptables-restore 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:04.020 19:42:27 nvmf_dif -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:04.020 19:42:27 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:04.020 19:42:27 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:05.934 19:42:29 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:05.934 
00:35:05.934 real 1m13.978s 00:35:05.934 user 7m9.675s 00:35:05.934 sys 0m20.880s 00:35:05.934 19:42:29 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:05.934 19:42:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:05.934 ************************************ 00:35:05.934 END TEST nvmf_dif 00:35:05.934 ************************************ 00:35:05.934 19:42:29 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:05.934 19:42:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:05.934 19:42:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:05.934 19:42:29 -- common/autotest_common.sh@10 -- # set +x 00:35:05.934 ************************************ 00:35:05.935 START TEST nvmf_abort_qd_sizes 00:35:05.935 ************************************ 00:35:05.935 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:06.194 * Looking for test storage... 00:35:06.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:06.194 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:06.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.195 --rc genhtml_branch_coverage=1 00:35:06.195 --rc genhtml_function_coverage=1 00:35:06.195 --rc genhtml_legend=1 00:35:06.195 --rc geninfo_all_blocks=1 00:35:06.195 --rc geninfo_unexecuted_blocks=1 00:35:06.195 00:35:06.195 ' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:06.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.195 --rc genhtml_branch_coverage=1 00:35:06.195 --rc genhtml_function_coverage=1 00:35:06.195 --rc genhtml_legend=1 00:35:06.195 --rc geninfo_all_blocks=1 00:35:06.195 --rc geninfo_unexecuted_blocks=1 00:35:06.195 00:35:06.195 ' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:06.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.195 --rc genhtml_branch_coverage=1 00:35:06.195 --rc genhtml_function_coverage=1 00:35:06.195 --rc genhtml_legend=1 00:35:06.195 --rc geninfo_all_blocks=1 00:35:06.195 --rc geninfo_unexecuted_blocks=1 00:35:06.195 00:35:06.195 ' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:06.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:06.195 --rc genhtml_branch_coverage=1 00:35:06.195 --rc genhtml_function_coverage=1 00:35:06.195 --rc genhtml_legend=1 00:35:06.195 --rc geninfo_all_blocks=1 00:35:06.195 --rc geninfo_unexecuted_blocks=1 00:35:06.195 00:35:06.195 ' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:06.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # prepare_net_devs 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # local -g is_hw=no 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # remove_spdk_ns 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:06.195 19:42:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:12.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:12.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:12.767 Found net devices under 0000:86:00.0: cvl_0_0 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ up == up ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:12.767 Found net devices under 0000:86:00.1: cvl_0_1 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # is_hw=yes 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:12.767 19:42:35 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:12.767 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:12.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:12.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.445 ms 00:35:12.768 00:35:12.768 --- 10.0.0.2 ping statistics --- 00:35:12.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.768 rtt min/avg/max/mdev = 0.445/0.445/0.445/0.000 ms 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:12.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:12.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:35:12.768 00:35:12.768 --- 10.0.0.1 ping statistics --- 00:35:12.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:12.768 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # return 0 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # '[' iso == iso ']' 00:35:12.768 19:42:35 nvmf_abort_qd_sizes -- nvmf/common.sh@477 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:14.671 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:14.671 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:14.671 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:14.671 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:14.671 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:14.931 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:16.309 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:16.309 19:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.309 19:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:35:16.309 19:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:35:16.309 19:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.309 19:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:35:16.309 19:42:39 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # nvmfpid=2368761 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # waitforlisten 2368761 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2368761 ']' 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
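# --- annotation: topology built by nvmf_tcp_init above ---
# The two e810 ports found under 0000:86:00.0/0000:86:00.1 (cvl_0_0, cvl_0_1)
# are split so target and initiator talk over real NICs on one host: cvl_0_0
# moves into a fresh namespace and carries the target IP, cvl_0_1 stays in the
# root namespace as the initiator side. A condensed recap of the traced steps
# (the iptables '-m comment' tag is dropped here for brevity):
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt is then launched inside the namespace ('ip netns exec
# cvl_0_0_ns_spdk nvmf_tgt ...'), which is why NVMF_APP gets prefixed with
# NVMF_TARGET_NS_CMD in the trace.
# --- end annotation ---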
00:35:16.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:16.309 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:16.309 [2024-10-17 19:42:40.084875] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:35:16.309 [2024-10-17 19:42:40.084928] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.568 [2024-10-17 19:42:40.165853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:16.568 [2024-10-17 19:42:40.208833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.568 [2024-10-17 19:42:40.208869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.568 [2024-10-17 19:42:40.208876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.568 [2024-10-17 19:42:40.208887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.568 [2024-10-17 19:42:40.208895] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:16.568 [2024-10-17 19:42:40.210440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.568 [2024-10-17 19:42:40.210548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.568 [2024-10-17 19:42:40.210658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.568 [2024-10-17 19:42:40.210658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:17.508 
19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:17.508 19:42:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:17.508 ************************************ 00:35:17.508 START TEST spdk_target_abort 00:35:17.508 ************************************ 00:35:17.508 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:35:17.508 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:17.508 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:17.508 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:17.508 19:42:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.798 spdk_targetn1 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.798 [2024-10-17 19:42:43.847894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.798 [2024-10-17 19:42:43.884851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:20.798 19:42:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.087 Initializing NVMe Controllers 00:35:24.087 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:24.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:24.087 Initialization complete. Launching workers. 00:35:24.087 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 18092, failed: 0 00:35:24.087 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1357, failed to submit 16735 00:35:24.087 success 782, unsuccessful 575, failed 0 00:35:24.087 19:42:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:24.087 19:42:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:27.394 Initializing NVMe Controllers 00:35:27.394 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:27.394 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:27.394 Initialization complete. Launching workers. 00:35:27.394 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8470, failed: 0 00:35:27.394 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1211, failed to submit 7259 00:35:27.394 success 319, unsuccessful 892, failed 0 00:35:27.394 19:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:27.394 19:42:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:29.927 Initializing NVMe Controllers 00:35:29.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:29.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:29.927 Initialization complete. Launching workers. 
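# --- annotation: reading the abort example's summaries ---
# Each run's counters have to balance: every completed I/O either had an abort
# submitted against it or is counted under "failed to submit", and every
# submitted abort finishes as success or unsuccessful. Checking the qd=4 run
# above:
#   I/O completed   = abort submitted + failed to submit
#   18092           = 1357            + 16735
#   abort submitted = success + unsuccessful
#   1357            = 782     + 575
# "unsuccessful" means the abort command completed without aborting the I/O
# (commonly because the I/O had already finished); only a non-zero "failed"
# column would indicate a real error, and all runs here report failed 0.
# --- end annotation ---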
00:35:29.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38498, failed: 0 00:35:29.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2898, failed to submit 35600 00:35:29.927 success 610, unsuccessful 2288, failed 0 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:29.927 19:42:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2368761 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2368761 ']' 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2368761 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:31.832 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2368761 00:35:32.090 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:32.090 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:32.090 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2368761' 00:35:32.091 killing process with pid 2368761 00:35:32.091 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2368761 00:35:32.091 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2368761 00:35:32.091 00:35:32.091 real 0m14.795s 00:35:32.091 user 0m58.848s 00:35:32.091 sys 0m2.716s 00:35:32.091 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:32.091 19:42:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:32.091 ************************************ 00:35:32.091 END TEST spdk_target_abort 00:35:32.091 ************************************ 00:35:32.091 19:42:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:32.091 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:32.091 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:32.091 19:42:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:32.350 ************************************ 00:35:32.350 START TEST kernel_target_abort 00:35:32.350 
************************************ 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@767 -- # local ip 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates=() 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # local -A ip_candidates 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z tcp ]] 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # ip=NVMF_INITIATOR_IP 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # [[ -z 10.0.0.1 ]] 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@781 -- # echo 10.0.0.1 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # local block nvme 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # modprobe nvmet 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:32.350 19:42:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:34.885 Waiting for block devices as requested 00:35:34.885 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:35.143 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:35.143 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:35.143 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:35.402 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:35.402 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:35.402 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:35.661 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:35.661 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:35.661 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:35.661 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:35.920 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:35.921 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:35.921 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:36.180 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:36.180 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:36.180 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:36.439 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:35:36.439 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:36.440 No valid GPT data, bailing 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # [[ -b /dev/nvme0n1 ]] 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:36.440 19:43:00 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # echo /dev/nvme0n1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 10.0.0.1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo tcp 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 4420 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo ipv4 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:36.440 00:35:36.440 Discovery Log Number of Records 2, Generation counter 2 00:35:36.440 =====Discovery Log Entry 0====== 00:35:36.440 trtype: tcp 00:35:36.440 adrfam: ipv4 00:35:36.440 subtype: current discovery subsystem 00:35:36.440 treq: not specified, sq flow control disable supported 00:35:36.440 portid: 1 00:35:36.440 trsvcid: 4420 00:35:36.440 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:36.440 traddr: 10.0.0.1 00:35:36.440 eflags: none 00:35:36.440 sectype: none 00:35:36.440 =====Discovery Log Entry 1====== 00:35:36.440 trtype: tcp 00:35:36.440 adrfam: ipv4 00:35:36.440 subtype: nvme subsystem 00:35:36.440 treq: not specified, sq flow control disable supported 00:35:36.440 portid: 1 00:35:36.440 trsvcid: 4420 00:35:36.440 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:36.440 traddr: 10.0.0.1 00:35:36.440 eflags: none 00:35:36.440 sectype: none 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.440 19:43:00 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:36.440 19:43:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:39.728 Initializing NVMe Controllers 00:35:39.728 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:39.728 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:39.728 Initialization complete. Launching workers. 00:35:39.729 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94504, failed: 0 00:35:39.729 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94504, failed to submit 0 00:35:39.729 success 0, unsuccessful 94504, failed 0 00:35:39.729 19:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:39.729 19:43:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.016 Initializing NVMe Controllers 00:35:43.016 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.016 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:43.016 Initialization complete. Launching workers. 
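# --- annotation: kernel-side (nvmet) target wiring ---
# kernel_target_abort exercises the in-kernel NVMe/TCP target instead of
# nvmf_tgt; configure_kernel_target drives it purely through configfs. xtrace
# does not show redirection targets, so the attribute names below are the
# standard nvmet configfs ones and this is a sketch under that assumption,
# with the values taken from the earlier trace (cosmetic attributes omitted):
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # activates the listener
# 'nvme discover -a 10.0.0.1 -t tcp -s 4420' then returns the two discovery
# log entries shown above: the discovery subsystem itself plus testnqn.
# --- end annotation ---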
00:35:43.016 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150699, failed: 0 00:35:43.016 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37842, failed to submit 112857 00:35:43.016 success 0, unsuccessful 37842, failed 0 00:35:43.016 19:43:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:43.016 19:43:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:46.304 Initializing NVMe Controllers 00:35:46.304 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:46.304 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:46.304 Initialization complete. Launching workers. 00:35:46.304 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141990, failed: 0 00:35:46.304 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35566, failed to submit 106424 00:35:46.304 success 0, unsuccessful 35566, failed 0 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # echo 0 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modprobe -r nvmet_tcp nvmet 00:35:46.304 19:43:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:48.837 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:48.838 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:48.838 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:50.331 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:50.331 00:35:50.331 real 0m18.048s 00:35:50.331 user 0m9.157s 00:35:50.331 sys 0m5.067s 00:35:50.331 19:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:50.331 19:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:50.331 ************************************ 00:35:50.331 END TEST kernel_target_abort 00:35:50.331 ************************************ 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # nvmfcleanup 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:50.331 19:43:13 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:50.331 rmmod nvme_tcp 00:35:50.331 rmmod nvme_fabrics 00:35:50.331 rmmod nvme_keyring 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@515 -- # '[' -n 2368761 ']' 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # killprocess 2368761 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2368761 ']' 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2368761 00:35:50.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2368761) - No such process 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2368761 is not found' 00:35:50.331 Process with pid 2368761 is not found 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # '[' iso == iso ']' 00:35:50.331 19:43:14 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:53.621 Waiting for block devices as requested 00:35:53.621 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.621 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.622 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.622 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:53.622 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:53.622 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:53.622 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:53.622 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:53.879 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:53.879 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:53.879 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:53.879 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:54.138 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:54.138 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:54.138 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:54.138 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:54.396 
0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-save 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@789 -- # iptables-restore 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:54.396 19:43:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.931 19:43:20 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.931 00:35:56.931 real 0m50.436s 00:35:56.931 user 1m12.416s 00:35:56.931 sys 0m16.536s 00:35:56.931 19:43:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:35:56.931 19:43:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:56.931 ************************************ 00:35:56.931 END TEST nvmf_abort_qd_sizes 00:35:56.931 ************************************ 00:35:56.931 19:43:20 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:56.931 19:43:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:35:56.931 19:43:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:35:56.931 19:43:20 -- common/autotest_common.sh@10 -- # set +x 00:35:56.931 ************************************ 00:35:56.931 START TEST keyring_file 00:35:56.931 ************************************ 00:35:56.931 19:43:20 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:56.931 * Looking for test storage... 
00:35:56.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:56.931 19:43:20 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:56.931 19:43:20 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:35:56.931 19:43:20 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:56.931 19:43:20 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:56.931 19:43:20 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:56.932 19:43:20 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.932 19:43:20 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:56.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.932 --rc genhtml_branch_coverage=1 00:35:56.932 --rc genhtml_function_coverage=1 00:35:56.932 --rc genhtml_legend=1 00:35:56.932 --rc geninfo_all_blocks=1 00:35:56.932 --rc geninfo_unexecuted_blocks=1 00:35:56.932 00:35:56.932 ' 00:35:56.932 19:43:20 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:56.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.932 --rc genhtml_branch_coverage=1 00:35:56.932 --rc genhtml_function_coverage=1 00:35:56.932 --rc genhtml_legend=1 00:35:56.932 --rc geninfo_all_blocks=1 
00:35:56.932 --rc geninfo_unexecuted_blocks=1 00:35:56.932 00:35:56.932 ' 00:35:56.932 19:43:20 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:56.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.932 --rc genhtml_branch_coverage=1 00:35:56.932 --rc genhtml_function_coverage=1 00:35:56.932 --rc genhtml_legend=1 00:35:56.932 --rc geninfo_all_blocks=1 00:35:56.932 --rc geninfo_unexecuted_blocks=1 00:35:56.932 00:35:56.932 ' 00:35:56.932 19:43:20 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:56.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.932 --rc genhtml_branch_coverage=1 00:35:56.932 --rc genhtml_function_coverage=1 00:35:56.932 --rc genhtml_legend=1 00:35:56.932 --rc geninfo_all_blocks=1 00:35:56.932 --rc geninfo_unexecuted_blocks=1 00:35:56.932 00:35:56.932 ' 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.932 19:43:20 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.932 19:43:20 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.932 19:43:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.932 19:43:20 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.932 19:43:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:56.932 19:43:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:56.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
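# --- annotation: how prep_key materialises the TLS PSKs used below ---
# prep_key (traced in the lines that follow) wraps each configured hex string
# into an NVMe TLS interchange-format PSK, writes it to a mktemp file and
# chmods it 0600 so the bperf RPCs can reference it by path. The body of the
# inline 'python -' helper is not shown by xtrace, so this reconstruction is
# an assumption: the interchange string is
# "NVMeTLSkey-1:<hash>:<base64 of key bytes + CRC32>:" (hash field 00 for
# digest 0, i.e. the key is used as configured), with the configured string's
# ASCII bytes taken verbatim as the key material.
key=00112233445566778899aabbccddeeff
path=$(mktemp)
python - "$key" > "$path" <<'PY'
# assumed equivalent of format_key in nvmf/common.sh
import base64, sys, zlib
key = sys.argv[1].encode()                   # ASCII bytes, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 appended little-endian
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
PY
chmod 0600 "$path"
# --- end annotation ---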
00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.AHTdfgVfNy 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@731 -- # python - 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AHTdfgVfNy 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.AHTdfgVfNy 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AHTdfgVfNy 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.foor2H66Qf 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:35:56.932 19:43:20 keyring_file -- nvmf/common.sh@731 -- # python - 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.foor2H66Qf 00:35:56.932 19:43:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.foor2H66Qf 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.foor2H66Qf 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=2377772 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:56.932 19:43:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2377772 00:35:56.933 19:43:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2377772 ']' 00:35:56.933 19:43:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.933 19:43:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:56.933 19:43:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.933 19:43:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:56.933 19:43:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:56.933 [2024-10-17 19:43:20.552181] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:35:56.933 [2024-10-17 19:43:20.552229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377772 ] 00:35:56.933 [2024-10-17 19:43:20.624902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.933 [2024-10-17 19:43:20.667232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:57.192 19:43:20 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.192 [2024-10-17 19:43:20.882355] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.192 null0 00:35:57.192 [2024-10-17 19:43:20.914412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:57.192 [2024-10-17 19:43:20.914762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:57.192 19:43:20 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:57.192 19:43:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.192 [2024-10-17 19:43:20.942472] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:57.192 request: 00:35:57.192 { 00:35:57.192 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.192 "secure_channel": false, 00:35:57.192 "listen_address": { 00:35:57.192 "trtype": "tcp", 00:35:57.192 "traddr": "127.0.0.1", 00:35:57.192 "trsvcid": "4420" 00:35:57.192 }, 00:35:57.192 "method": "nvmf_subsystem_add_listener", 00:35:57.192 "req_id": 1 00:35:57.192 } 00:35:57.192 Got JSON-RPC error response 00:35:57.192 response: 00:35:57.192 { 00:35:57.192 
"code": -32602, 00:35:57.192 "message": "Invalid parameters" 00:35:57.192 } 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:35:57.193 19:43:20 keyring_file -- keyring/file.sh@47 -- # bperfpid=2377782 00:35:57.193 19:43:20 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2377782 /var/tmp/bperf.sock 00:35:57.193 19:43:20 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2377782 ']' 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:57.193 19:43:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.452 [2024-10-17 19:43:20.993941] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:35:57.452 [2024-10-17 19:43:20.993984] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2377782 ] 00:35:57.452 [2024-10-17 19:43:21.067411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.452 [2024-10-17 19:43:21.107221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.452 19:43:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.452 19:43:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:35:57.452 19:43:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:35:57.452 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:35:57.710 19:43:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.foor2H66Qf 00:35:57.710 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.foor2H66Qf 00:35:57.968 19:43:21 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:57.968 19:43:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:57.968 19:43:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:57.968 19:43:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:57.968 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:58.226 19:43:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.AHTdfgVfNy == \/\t\m\p\/\t\m\p\.\A\H\T\d\f\g\V\f\N\y ]] 00:35:58.226 19:43:21 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:58.226 19:43:21 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.226 19:43:21 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.foor2H66Qf == \/\t\m\p\/\t\m\p\.\f\o\o\r\2\H\6\6\Q\f ]] 00:35:58.226 19:43:21 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:58.226 19:43:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.484 19:43:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:58.484 19:43:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:58.484 19:43:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:58.484 19:43:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.484 19:43:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.484 19:43:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:58.484 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.742 19:43:22 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:58.742 19:43:22 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:58.742 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:59.000 [2024-10-17 19:43:22.533113] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:59.000 nvme0n1 00:35:59.000 19:43:22 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:59.000 19:43:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:59.000 19:43:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.000 19:43:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.000 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.000 19:43:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:59.259 19:43:22 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:59.259 19:43:22 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:59.259 19:43:22 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:35:59.259 19:43:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.259 19:43:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.259 19:43:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:59.259 19:43:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.259 19:43:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:59.259 19:43:23 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:59.518 Running I/O for 1 seconds... 00:36:00.453 19415.00 IOPS, 75.84 MiB/s 00:36:00.453 Latency(us) 00:36:00.453 [2024-10-17T17:43:24.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.453 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:00.453 nvme0n1 : 1.00 19457.02 76.00 0.00 0.00 6566.83 3885.35 11359.57 00:36:00.453 [2024-10-17T17:43:24.237Z] =================================================================================================================== 00:36:00.453 [2024-10-17T17:43:24.237Z] Total : 19457.02 76.00 0.00 0.00 6566.83 3885.35 11359.57 00:36:00.453 { 00:36:00.453 "results": [ 00:36:00.453 { 00:36:00.453 "job": "nvme0n1", 00:36:00.453 "core_mask": "0x2", 00:36:00.453 "workload": "randrw", 00:36:00.453 "percentage": 50, 00:36:00.453 "status": "finished", 00:36:00.453 "queue_depth": 128, 00:36:00.453 "io_size": 4096, 00:36:00.453 "runtime": 1.004419, 00:36:00.453 "iops": 19457.01943113382, 00:36:00.453 "mibps": 76.00398215286648, 00:36:00.453 "io_failed": 0, 00:36:00.453 "io_timeout": 0, 00:36:00.453 "avg_latency_us": 6566.828865481003, 00:36:00.453 "min_latency_us": 3885.3485714285716, 00:36:00.453 "max_latency_us": 11359.573333333334 00:36:00.453 } 00:36:00.453 ], 00:36:00.453 "core_count": 1 00:36:00.453 } 00:36:00.453 19:43:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:00.453 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:00.712 19:43:24 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:00.712 19:43:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:00.713 19:43:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:00.713 19:43:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:00.713 19:43:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:00.713 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.972 19:43:24 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:00.972 19:43:24 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:00.972 19:43:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:00.972 19:43:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:00.972 19:43:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:00.972 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.972 19:43:24 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:00.972 19:43:24 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:00.972 19:43:24 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:00.972 19:43:24 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:00.972 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:01.231 [2024-10-17 19:43:24.887678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:01.231 [2024-10-17 19:43:24.888390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ca2a0 (107): Transport endpoint is not connected 00:36:01.231 [2024-10-17 19:43:24.889385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ca2a0 (9): Bad file descriptor 00:36:01.231 [2024-10-17 19:43:24.890386] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:01.231 [2024-10-17 19:43:24.890396] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:01.231 [2024-10-17 19:43:24.890404] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:01.231 [2024-10-17 19:43:24.890413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
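(Note: the failed attach above is intentional. key1 does not match the PSK the target listener expects for this host, so the TLS handshake drops the connection and the controller ends up in a failed state. The test asserts that with the NOT wrapper; a simplified sketch of the pattern follows — the real helper lives in autotest_common.sh and also validates its argument, and the rpc.py request/response dump right after this note is the expected error being reported:

NOT() {
  # invert the exit status: succeed only if the wrapped command fails
  if "$@"; then
    return 1   # unexpected success
  fi
  return 0     # expected failure
}

NOT ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
  -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

end of note.)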
00:36:01.231 request: 00:36:01.231 { 00:36:01.231 "name": "nvme0", 00:36:01.231 "trtype": "tcp", 00:36:01.231 "traddr": "127.0.0.1", 00:36:01.231 "adrfam": "ipv4", 00:36:01.231 "trsvcid": "4420", 00:36:01.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:01.231 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:01.231 "prchk_reftag": false, 00:36:01.231 "prchk_guard": false, 00:36:01.231 "hdgst": false, 00:36:01.231 "ddgst": false, 00:36:01.231 "psk": "key1", 00:36:01.231 "allow_unrecognized_csi": false, 00:36:01.231 "method": "bdev_nvme_attach_controller", 00:36:01.231 "req_id": 1 00:36:01.231 } 00:36:01.231 Got JSON-RPC error response 00:36:01.231 response: 00:36:01.231 { 00:36:01.231 "code": -5, 00:36:01.231 "message": "Input/output error" 00:36:01.231 } 00:36:01.231 19:43:24 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:01.231 19:43:24 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:01.231 19:43:24 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:01.231 19:43:24 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:01.231 19:43:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:01.231 19:43:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:01.231 19:43:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:01.231 19:43:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:01.231 19:43:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:01.231 19:43:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:01.489 19:43:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:01.489 19:43:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:01.489 19:43:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:01.489 19:43:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:01.489 19:43:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:01.489 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:01.489 19:43:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:01.748 19:43:25 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:01.748 19:43:25 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:01.748 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:01.748 19:43:25 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:01.748 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:02.006 19:43:25 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:02.006 19:43:25 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:02.006 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.265 19:43:25 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:02.265 19:43:25 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.AHTdfgVfNy 00:36:02.265 19:43:25 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.265 19:43:25 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:36:02.265 19:43:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:36:02.265 [2024-10-17 19:43:26.035417] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AHTdfgVfNy': 0100660 00:36:02.265 [2024-10-17 19:43:26.035442] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:02.265 request: 00:36:02.265 { 00:36:02.265 "name": "key0", 00:36:02.265 "path": "/tmp/tmp.AHTdfgVfNy", 00:36:02.265 "method": "keyring_file_add_key", 00:36:02.265 "req_id": 1 00:36:02.265 } 00:36:02.265 Got JSON-RPC error response 00:36:02.265 response: 00:36:02.265 { 00:36:02.265 "code": -1, 00:36:02.265 "message": "Operation not permitted" 00:36:02.265 } 00:36:02.265 19:43:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:02.265 19:43:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:02.524 19:43:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:02.524 19:43:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:02.524 19:43:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.AHTdfgVfNy 00:36:02.524 19:43:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:36:02.524 19:43:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AHTdfgVfNy 00:36:02.524 19:43:26 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.AHTdfgVfNy 00:36:02.524 19:43:26 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:02.524 19:43:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:02.524 19:43:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.524 19:43:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.524 19:43:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:02.524 19:43:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.783 19:43:26 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:02.783 19:43:26 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:02.783 19:43:26 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:02.783 19:43:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:03.041 [2024-10-17 19:43:26.633016] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AHTdfgVfNy': No such file or directory 00:36:03.041 [2024-10-17 19:43:26.633039] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:03.041 [2024-10-17 19:43:26.633055] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:03.041 [2024-10-17 19:43:26.633061] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:03.041 [2024-10-17 19:43:26.633069] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:03.041 [2024-10-17 19:43:26.633076] bdev_nvme.c:6438:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:03.041 request: 00:36:03.041 { 00:36:03.041 "name": "nvme0", 00:36:03.041 "trtype": "tcp", 00:36:03.041 "traddr": "127.0.0.1", 00:36:03.041 "adrfam": "ipv4", 00:36:03.041 "trsvcid": "4420", 00:36:03.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.041 "prchk_reftag": false, 00:36:03.041 "prchk_guard": false, 00:36:03.041 "hdgst": false, 00:36:03.041 "ddgst": false, 00:36:03.041 "psk": "key0", 00:36:03.041 "allow_unrecognized_csi": false, 00:36:03.041 "method": "bdev_nvme_attach_controller", 00:36:03.041 "req_id": 1 00:36:03.041 } 00:36:03.041 Got JSON-RPC error response 00:36:03.041 response: 00:36:03.041 { 00:36:03.041 "code": -19, 00:36:03.041 "message": "No such device" 00:36:03.041 } 00:36:03.041 19:43:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:03.041 19:43:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:03.041 19:43:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:03.041 19:43:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:03.041 19:43:26 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:03.041 19:43:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:03.299 19:43:26 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Qv8ySsuWRs 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:03.299 19:43:26 keyring_file -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:03.299 19:43:26 keyring_file -- nvmf/common.sh@728 -- # local prefix key digest 00:36:03.299 19:43:26 keyring_file -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:03.299 19:43:26 keyring_file -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:03.299 19:43:26 keyring_file -- nvmf/common.sh@730 -- # digest=0 00:36:03.299 19:43:26 keyring_file -- nvmf/common.sh@731 -- # python - 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Qv8ySsuWRs 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Qv8ySsuWRs 00:36:03.299 19:43:26 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Qv8ySsuWRs 00:36:03.299 19:43:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qv8ySsuWRs 00:36:03.299 19:43:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qv8ySsuWRs 00:36:03.299 19:43:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:03.299 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:03.557 nvme0n1 00:36:03.557 19:43:27 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:03.557 19:43:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:03.557 19:43:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:03.557 19:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:03.557 19:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:03.557 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.815 19:43:27 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:03.815 19:43:27 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:03.815 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:04.074 19:43:27 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:04.074 19:43:27 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:04.074 19:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:04.074 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:04.074 19:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:04.332 19:43:27 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:04.332 19:43:27 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:04.332 19:43:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:04.332 19:43:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:04.332 19:43:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:04.332 19:43:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:04.332 19:43:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.332 19:43:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:04.332 19:43:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:04.332 19:43:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:04.590 19:43:28 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:04.590 19:43:28 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:04.590 19:43:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.848 19:43:28 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:04.848 19:43:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qv8ySsuWRs 00:36:04.848 19:43:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qv8ySsuWRs 00:36:05.107 19:43:28 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.foor2H66Qf 00:36:05.107 19:43:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.foor2H66Qf 00:36:05.107 19:43:28 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.107 19:43:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.366 nvme0n1 00:36:05.366 19:43:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:05.366 19:43:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:05.624 19:43:29 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:05.624 "subsystems": [ 00:36:05.624 { 00:36:05.624 "subsystem": "keyring", 00:36:05.624 "config": [ 00:36:05.624 { 00:36:05.624 "method": "keyring_file_add_key", 00:36:05.624 "params": { 00:36:05.624 "name": "key0", 00:36:05.624 "path": "/tmp/tmp.Qv8ySsuWRs" 00:36:05.624 } 00:36:05.624 }, 00:36:05.624 { 00:36:05.624 "method": "keyring_file_add_key", 00:36:05.624 "params": { 00:36:05.624 "name": "key1", 00:36:05.624 "path": "/tmp/tmp.foor2H66Qf" 00:36:05.624 } 00:36:05.624 } 00:36:05.624 ] 
00:36:05.624 }, 00:36:05.624 { 00:36:05.624 "subsystem": "iobuf", 00:36:05.624 "config": [ 00:36:05.624 { 00:36:05.624 "method": "iobuf_set_options", 00:36:05.624 "params": { 00:36:05.624 "small_pool_count": 8192, 00:36:05.624 "large_pool_count": 1024, 00:36:05.624 "small_bufsize": 8192, 00:36:05.624 "large_bufsize": 135168, 00:36:05.624 "enable_numa": false 00:36:05.624 } 00:36:05.624 } 00:36:05.624 ] 00:36:05.624 }, 00:36:05.624 { 00:36:05.624 "subsystem": "sock", 00:36:05.624 "config": [ 00:36:05.624 { 00:36:05.624 "method": "sock_set_default_impl", 00:36:05.624 "params": { 00:36:05.624 "impl_name": "posix" 00:36:05.624 } 00:36:05.624 }, 00:36:05.624 { 00:36:05.624 "method": "sock_impl_set_options", 00:36:05.624 "params": { 00:36:05.624 "impl_name": "ssl", 00:36:05.624 "recv_buf_size": 4096, 00:36:05.624 "send_buf_size": 4096, 00:36:05.624 "enable_recv_pipe": true, 00:36:05.624 "enable_quickack": false, 00:36:05.624 "enable_placement_id": 0, 00:36:05.624 "enable_zerocopy_send_server": true, 00:36:05.624 "enable_zerocopy_send_client": false, 00:36:05.624 "zerocopy_threshold": 0, 00:36:05.624 "tls_version": 0, 00:36:05.624 "enable_ktls": false 00:36:05.624 } 00:36:05.624 }, 00:36:05.624 { 00:36:05.624 "method": "sock_impl_set_options", 00:36:05.624 "params": { 00:36:05.624 "impl_name": "posix", 00:36:05.624 "recv_buf_size": 2097152, 00:36:05.624 "send_buf_size": 2097152, 00:36:05.624 "enable_recv_pipe": true, 00:36:05.624 "enable_quickack": false, 00:36:05.624 "enable_placement_id": 0, 00:36:05.624 "enable_zerocopy_send_server": true, 00:36:05.624 "enable_zerocopy_send_client": false, 00:36:05.624 "zerocopy_threshold": 0, 00:36:05.624 "tls_version": 0, 00:36:05.624 "enable_ktls": false 00:36:05.624 } 00:36:05.624 } 00:36:05.624 ] 00:36:05.624 }, 00:36:05.625 { 00:36:05.625 "subsystem": "vmd", 00:36:05.625 "config": [] 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "subsystem": "accel", 00:36:05.625 "config": [ 00:36:05.625 { 00:36:05.625 "method": "accel_set_options", 00:36:05.625 "params": { 00:36:05.625 "small_cache_size": 128, 00:36:05.625 "large_cache_size": 16, 00:36:05.625 "task_count": 2048, 00:36:05.625 "sequence_count": 2048, 00:36:05.625 "buf_count": 2048 00:36:05.625 } 00:36:05.625 } 00:36:05.625 ] 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "subsystem": "bdev", 00:36:05.625 "config": [ 00:36:05.625 { 00:36:05.625 "method": "bdev_set_options", 00:36:05.625 "params": { 00:36:05.625 "bdev_io_pool_size": 65535, 00:36:05.625 "bdev_io_cache_size": 256, 00:36:05.625 "bdev_auto_examine": true, 00:36:05.625 "iobuf_small_cache_size": 128, 00:36:05.625 "iobuf_large_cache_size": 16 00:36:05.625 } 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "method": "bdev_raid_set_options", 00:36:05.625 "params": { 00:36:05.625 "process_window_size_kb": 1024, 00:36:05.625 "process_max_bandwidth_mb_sec": 0 00:36:05.625 } 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "method": "bdev_iscsi_set_options", 00:36:05.625 "params": { 00:36:05.625 "timeout_sec": 30 00:36:05.625 } 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "method": "bdev_nvme_set_options", 00:36:05.625 "params": { 00:36:05.625 "action_on_timeout": "none", 00:36:05.625 "timeout_us": 0, 00:36:05.625 "timeout_admin_us": 0, 00:36:05.625 "keep_alive_timeout_ms": 10000, 00:36:05.625 "arbitration_burst": 0, 00:36:05.625 "low_priority_weight": 0, 00:36:05.625 "medium_priority_weight": 0, 00:36:05.625 "high_priority_weight": 0, 00:36:05.625 "nvme_adminq_poll_period_us": 10000, 00:36:05.625 "nvme_ioq_poll_period_us": 0, 00:36:05.625 "io_queue_requests": 512, 
00:36:05.625 "delay_cmd_submit": true, 00:36:05.625 "transport_retry_count": 4, 00:36:05.625 "bdev_retry_count": 3, 00:36:05.625 "transport_ack_timeout": 0, 00:36:05.625 "ctrlr_loss_timeout_sec": 0, 00:36:05.625 "reconnect_delay_sec": 0, 00:36:05.625 "fast_io_fail_timeout_sec": 0, 00:36:05.625 "disable_auto_failback": false, 00:36:05.625 "generate_uuids": false, 00:36:05.625 "transport_tos": 0, 00:36:05.625 "nvme_error_stat": false, 00:36:05.625 "rdma_srq_size": 0, 00:36:05.625 "io_path_stat": false, 00:36:05.625 "allow_accel_sequence": false, 00:36:05.625 "rdma_max_cq_size": 0, 00:36:05.625 "rdma_cm_event_timeout_ms": 0, 00:36:05.625 "dhchap_digests": [ 00:36:05.625 "sha256", 00:36:05.625 "sha384", 00:36:05.625 "sha512" 00:36:05.625 ], 00:36:05.625 "dhchap_dhgroups": [ 00:36:05.625 "null", 00:36:05.625 "ffdhe2048", 00:36:05.625 "ffdhe3072", 00:36:05.625 "ffdhe4096", 00:36:05.625 "ffdhe6144", 00:36:05.625 "ffdhe8192" 00:36:05.625 ] 00:36:05.625 } 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "method": "bdev_nvme_attach_controller", 00:36:05.625 "params": { 00:36:05.625 "name": "nvme0", 00:36:05.625 "trtype": "TCP", 00:36:05.625 "adrfam": "IPv4", 00:36:05.625 "traddr": "127.0.0.1", 00:36:05.625 "trsvcid": "4420", 00:36:05.625 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.625 "prchk_reftag": false, 00:36:05.625 "prchk_guard": false, 00:36:05.625 "ctrlr_loss_timeout_sec": 0, 00:36:05.625 "reconnect_delay_sec": 0, 00:36:05.625 "fast_io_fail_timeout_sec": 0, 00:36:05.625 "psk": "key0", 00:36:05.625 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.625 "hdgst": false, 00:36:05.625 "ddgst": false, 00:36:05.625 "multipath": "multipath" 00:36:05.625 } 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "method": "bdev_nvme_set_hotplug", 00:36:05.625 "params": { 00:36:05.625 "period_us": 100000, 00:36:05.625 "enable": false 00:36:05.625 } 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "method": "bdev_wait_for_examine" 00:36:05.625 } 00:36:05.625 ] 00:36:05.625 }, 00:36:05.625 { 00:36:05.625 "subsystem": "nbd", 00:36:05.625 "config": [] 00:36:05.625 } 00:36:05.625 ] 00:36:05.625 }' 00:36:05.625 19:43:29 keyring_file -- keyring/file.sh@115 -- # killprocess 2377782 00:36:05.625 19:43:29 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2377782 ']' 00:36:05.625 19:43:29 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2377782 00:36:05.625 19:43:29 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:05.625 19:43:29 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:05.625 19:43:29 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2377782 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2377782' 00:36:05.885 killing process with pid 2377782 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@969 -- # kill 2377782 00:36:05.885 Received shutdown signal, test time was about 1.000000 seconds 00:36:05.885 00:36:05.885 Latency(us) 00:36:05.885 [2024-10-17T17:43:29.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:05.885 [2024-10-17T17:43:29.669Z] =================================================================================================================== 00:36:05.885 [2024-10-17T17:43:29.669Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@974 -- # wait 2377782 00:36:05.885 19:43:29 keyring_file -- keyring/file.sh@118 -- # bperfpid=2379298 00:36:05.885 19:43:29 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2379298 /var/tmp/bperf.sock 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 2379298 ']' 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:05.885 19:43:29 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:05.885 19:43:29 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:05.885 19:43:29 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:05.885 "subsystems": [ 00:36:05.885 { 00:36:05.885 "subsystem": "keyring", 00:36:05.885 "config": [ 00:36:05.885 { 00:36:05.885 "method": "keyring_file_add_key", 00:36:05.885 "params": { 00:36:05.885 "name": "key0", 00:36:05.885 "path": "/tmp/tmp.Qv8ySsuWRs" 00:36:05.885 } 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "method": "keyring_file_add_key", 00:36:05.885 "params": { 00:36:05.885 "name": "key1", 00:36:05.885 "path": "/tmp/tmp.foor2H66Qf" 00:36:05.885 } 00:36:05.885 } 00:36:05.885 ] 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "subsystem": "iobuf", 00:36:05.885 "config": [ 00:36:05.885 { 00:36:05.885 "method": "iobuf_set_options", 00:36:05.885 "params": { 00:36:05.885 "small_pool_count": 8192, 00:36:05.885 "large_pool_count": 1024, 00:36:05.885 "small_bufsize": 8192, 00:36:05.885 "large_bufsize": 135168, 00:36:05.885 "enable_numa": false 00:36:05.885 } 00:36:05.885 } 00:36:05.885 ] 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "subsystem": "sock", 00:36:05.885 "config": [ 00:36:05.885 { 00:36:05.885 "method": "sock_set_default_impl", 00:36:05.885 "params": { 00:36:05.885 "impl_name": "posix" 00:36:05.885 } 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "method": "sock_impl_set_options", 00:36:05.885 "params": { 00:36:05.885 "impl_name": "ssl", 00:36:05.885 "recv_buf_size": 4096, 00:36:05.885 "send_buf_size": 4096, 00:36:05.885 "enable_recv_pipe": true, 00:36:05.885 "enable_quickack": false, 00:36:05.885 "enable_placement_id": 0, 00:36:05.885 "enable_zerocopy_send_server": true, 00:36:05.885 "enable_zerocopy_send_client": false, 00:36:05.885 "zerocopy_threshold": 0, 00:36:05.885 "tls_version": 0, 00:36:05.885 "enable_ktls": false 00:36:05.885 } 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "method": "sock_impl_set_options", 00:36:05.885 "params": { 00:36:05.885 "impl_name": "posix", 00:36:05.885 "recv_buf_size": 2097152, 00:36:05.885 "send_buf_size": 2097152, 00:36:05.885 "enable_recv_pipe": true, 00:36:05.885 "enable_quickack": false, 00:36:05.885 "enable_placement_id": 0, 00:36:05.885 "enable_zerocopy_send_server": true, 00:36:05.885 "enable_zerocopy_send_client": false, 00:36:05.885 "zerocopy_threshold": 0, 00:36:05.885 "tls_version": 0, 00:36:05.885 "enable_ktls": false 00:36:05.885 } 00:36:05.885 } 00:36:05.885 ] 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "subsystem": "vmd", 00:36:05.885 "config": [] 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "subsystem": "accel", 00:36:05.885 
"config": [ 00:36:05.885 { 00:36:05.885 "method": "accel_set_options", 00:36:05.885 "params": { 00:36:05.885 "small_cache_size": 128, 00:36:05.885 "large_cache_size": 16, 00:36:05.885 "task_count": 2048, 00:36:05.885 "sequence_count": 2048, 00:36:05.885 "buf_count": 2048 00:36:05.885 } 00:36:05.885 } 00:36:05.885 ] 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "subsystem": "bdev", 00:36:05.885 "config": [ 00:36:05.885 { 00:36:05.885 "method": "bdev_set_options", 00:36:05.885 "params": { 00:36:05.885 "bdev_io_pool_size": 65535, 00:36:05.885 "bdev_io_cache_size": 256, 00:36:05.885 "bdev_auto_examine": true, 00:36:05.885 "iobuf_small_cache_size": 128, 00:36:05.885 "iobuf_large_cache_size": 16 00:36:05.885 } 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "method": "bdev_raid_set_options", 00:36:05.885 "params": { 00:36:05.885 "process_window_size_kb": 1024, 00:36:05.885 "process_max_bandwidth_mb_sec": 0 00:36:05.885 } 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "method": "bdev_iscsi_set_options", 00:36:05.885 "params": { 00:36:05.885 "timeout_sec": 30 00:36:05.885 } 00:36:05.885 }, 00:36:05.885 { 00:36:05.885 "method": "bdev_nvme_set_options", 00:36:05.885 "params": { 00:36:05.885 "action_on_timeout": "none", 00:36:05.885 "timeout_us": 0, 00:36:05.885 "timeout_admin_us": 0, 00:36:05.885 "keep_alive_timeout_ms": 10000, 00:36:05.885 "arbitration_burst": 0, 00:36:05.885 "low_priority_weight": 0, 00:36:05.885 "medium_priority_weight": 0, 00:36:05.886 "high_priority_weight": 0, 00:36:05.886 "nvme_adminq_poll_period_us": 10000, 00:36:05.886 "nvme_ioq_poll_period_us": 0, 00:36:05.886 "io_queue_requests": 512, 00:36:05.886 "delay_cmd_submit": true, 00:36:05.886 "transport_retry_count": 4, 00:36:05.886 "bdev_retry_count": 3, 00:36:05.886 "transport_ack_timeout": 0, 00:36:05.886 "ctrlr_loss_timeout_sec": 0, 00:36:05.886 "reconnect_delay_sec": 0, 00:36:05.886 "fast_io_fail_timeout_sec": 0, 00:36:05.886 "disable_auto_failback": false, 00:36:05.886 "generate_uuids": false, 00:36:05.886 "transport_tos": 0, 00:36:05.886 "nvme_error_stat": false, 00:36:05.886 "rdma_srq_size": 0, 00:36:05.886 "io_path_stat": false, 00:36:05.886 "allow_accel_sequence": false, 00:36:05.886 "rdma_max_cq_size": 0, 00:36:05.886 "rdma_cm_event_timeout_ms": 0, 00:36:05.886 "dhchap_digests": [ 00:36:05.886 "sha256", 00:36:05.886 "sha384", 00:36:05.886 "sha512" 00:36:05.886 ], 00:36:05.886 "dhchap_dhgroups": [ 00:36:05.886 "null", 00:36:05.886 "ffdhe2048", 00:36:05.886 "ffdhe3072", 00:36:05.886 "ffdhe4096", 00:36:05.886 "ffdhe6144", 00:36:05.886 "ffdhe8192" 00:36:05.886 ] 00:36:05.886 } 00:36:05.886 }, 00:36:05.886 { 00:36:05.886 "method": "bdev_nvme_attach_controller", 00:36:05.886 "params": { 00:36:05.886 "name": "nvme0", 00:36:05.886 "trtype": "TCP", 00:36:05.886 "adrfam": "IPv4", 00:36:05.886 "traddr": "127.0.0.1", 00:36:05.886 "trsvcid": "4420", 00:36:05.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.886 "prchk_reftag": false, 00:36:05.886 "prchk_guard": false, 00:36:05.886 "ctrlr_loss_timeout_sec": 0, 00:36:05.886 "reconnect_delay_sec": 0, 00:36:05.886 "fast_io_fail_timeout_sec": 0, 00:36:05.886 "psk": "key0", 00:36:05.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.886 "hdgst": false, 00:36:05.886 "ddgst": false, 00:36:05.886 "multipath": "multipath" 00:36:05.886 } 00:36:05.886 }, 00:36:05.886 { 00:36:05.886 "method": "bdev_nvme_set_hotplug", 00:36:05.886 "params": { 00:36:05.886 "period_us": 100000, 00:36:05.886 "enable": false 00:36:05.886 } 00:36:05.886 }, 00:36:05.886 { 00:36:05.886 "method": "bdev_wait_for_examine" 
00:36:05.886 } 00:36:05.886 ] 00:36:05.886 }, 00:36:05.886 { 00:36:05.886 "subsystem": "nbd", 00:36:05.886 "config": [] 00:36:05.886 } 00:36:05.886 ] 00:36:05.886 }' 00:36:05.886 19:43:29 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:05.886 19:43:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:05.886 [2024-10-17 19:43:29.639689] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 00:36:05.886 [2024-10-17 19:43:29.639735] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379298 ] 00:36:06.144 [2024-10-17 19:43:29.714154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.144 [2024-10-17 19:43:29.755922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.144 [2024-10-17 19:43:29.915042] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:06.712 19:43:30 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:06.712 19:43:30 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:36:06.712 19:43:30 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:06.712 19:43:30 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:06.712 19:43:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.971 19:43:30 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:06.971 19:43:30 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:06.971 19:43:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.971 19:43:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.971 19:43:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.971 19:43:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.971 19:43:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.230 19:43:30 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:07.230 19:43:30 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:07.230 19:43:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:07.230 19:43:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:07.230 19:43:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:07.230 19:43:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.230 19:43:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:07.489 19:43:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:07.489 19:43:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:07.489 19:43:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:07.489 19:43:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:07.489 19:43:31 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:07.489 19:43:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:07.489 19:43:31 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Qv8ySsuWRs /tmp/tmp.foor2H66Qf 00:36:07.747 19:43:31 keyring_file -- keyring/file.sh@20 -- # killprocess 2379298 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2379298 ']' 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2379298 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379298 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379298' 00:36:07.747 killing process with pid 2379298 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@969 -- # kill 2379298 00:36:07.747 Received shutdown signal, test time was about 1.000000 seconds 00:36:07.747 00:36:07.747 Latency(us) 00:36:07.747 [2024-10-17T17:43:31.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:07.747 [2024-10-17T17:43:31.531Z] =================================================================================================================== 00:36:07.747 [2024-10-17T17:43:31.531Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@974 -- # wait 2379298 00:36:07.747 19:43:31 keyring_file -- keyring/file.sh@21 -- # killprocess 2377772 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 2377772 ']' 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@954 -- # kill -0 2377772 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@955 -- # uname 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:07.747 19:43:31 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2377772 00:36:08.005 19:43:31 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:08.005 19:43:31 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:08.005 19:43:31 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2377772' 00:36:08.005 killing process with pid 2377772 00:36:08.005 19:43:31 keyring_file -- common/autotest_common.sh@969 -- # kill 2377772 00:36:08.005 19:43:31 keyring_file -- common/autotest_common.sh@974 -- # wait 2377772 00:36:08.264 00:36:08.264 real 0m11.642s 00:36:08.264 user 0m28.853s 00:36:08.264 sys 0m2.745s 00:36:08.264 19:43:31 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:08.264 19:43:31 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:08.264 ************************************ 00:36:08.264 END TEST keyring_file 00:36:08.264 ************************************ 00:36:08.264 19:43:31 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:36:08.264 19:43:31 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:08.264 19:43:31 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:36:08.264 19:43:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:36:08.264 19:43:31 -- common/autotest_common.sh@10 -- # set +x 00:36:08.264 ************************************ 00:36:08.264 START TEST keyring_linux 00:36:08.264 ************************************ 00:36:08.264 19:43:31 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:08.264 Joined session keyring: 666756614 00:36:08.264 * Looking for test storage... 00:36:08.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:08.264 19:43:31 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:08.264 19:43:31 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:36:08.264 19:43:31 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:08.522 19:43:32 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:08.522 19:43:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:08.522 19:43:32 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:08.522 19:43:32 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.522 --rc genhtml_branch_coverage=1 00:36:08.522 --rc genhtml_function_coverage=1 00:36:08.522 --rc genhtml_legend=1 00:36:08.522 --rc geninfo_all_blocks=1 00:36:08.522 --rc geninfo_unexecuted_blocks=1 00:36:08.522 00:36:08.522 ' 00:36:08.522 19:43:32 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.522 --rc genhtml_branch_coverage=1 00:36:08.522 --rc genhtml_function_coverage=1 00:36:08.522 --rc genhtml_legend=1 00:36:08.522 --rc geninfo_all_blocks=1 00:36:08.522 --rc geninfo_unexecuted_blocks=1 00:36:08.522 00:36:08.522 ' 00:36:08.522 19:43:32 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.522 --rc genhtml_branch_coverage=1 00:36:08.522 --rc genhtml_function_coverage=1 00:36:08.522 --rc genhtml_legend=1 00:36:08.522 --rc geninfo_all_blocks=1 00:36:08.522 --rc geninfo_unexecuted_blocks=1 00:36:08.522 00:36:08.522 ' 00:36:08.522 19:43:32 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:08.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:08.522 --rc genhtml_branch_coverage=1 00:36:08.522 --rc genhtml_function_coverage=1 00:36:08.522 --rc genhtml_legend=1 00:36:08.522 --rc geninfo_all_blocks=1 00:36:08.522 --rc geninfo_unexecuted_blocks=1 00:36:08.523 00:36:08.523 ' 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:08.523 19:43:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:08.523 19:43:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:08.523 19:43:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:08.523 19:43:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:08.523 19:43:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.523 19:43:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.523 19:43:32 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:08.523 19:43:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:08.523 19:43:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
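A note on the environment established above: sourcing test/nvmf/common.sh pins the TCP listener ports (4420/4421/4422) and derives the host NQN from nvme gen-hostnqn, which typically builds it from the machine's DMI UUID, falling back to a random UUID on hosts without one (an assumption about nvme-cli behavior, not something this log shows):

$ nvme gen-hostnqn
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562   # UUID is machine-specific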
00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:08.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@731 -- # python - 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:08.523 /tmp/:spdk-test:key0 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:08.523 
19:43:32 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@728 -- # local prefix key digest 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@730 -- # key=112233445566778899aabbccddeeff00 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@730 -- # digest=0 00:36:08.523 19:43:32 keyring_linux -- nvmf/common.sh@731 -- # python - 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:08.523 19:43:32 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:08.523 /tmp/:spdk-test:key1 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2379847 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2379847 00:36:08.523 19:43:32 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:08.523 19:43:32 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2379847 ']' 00:36:08.523 19:43:32 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:08.523 19:43:32 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:08.523 19:43:32 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:08.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:08.523 19:43:32 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:08.523 19:43:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:08.523 [2024-10-17 19:43:32.243724] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
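The prep_key steps above write each PSK to a 0600 file in the NVMe TLS PSK interchange format via the inline 'python -' call. A minimal sketch of that formatting, assuming the layout NVMeTLSkey-1:<hmac>:<base64 of PSK plus CRC-32>: with digest 0 meaning no HMAC transform; the little-endian CRC byte order is an assumption here, not something the log confirms:

# hedged reconstruction of format_interchange_psk, not the verbatim helper
format_interchange_psk() {
local key=$1 digest=$2
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")  # byte order assumed
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}

Run against key0 above, this prints a string of the same shape as the /tmp/:spdk-test:key0 payload (NVMeTLSkey-1:00:MDAx...JEiQ:).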
00:36:08.523 [2024-10-17 19:43:32.243772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379847 ] 00:36:08.782 [2024-10-17 19:43:32.318711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.782 [2024-10-17 19:43:32.360134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.782 19:43:32 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:08.782 19:43:32 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:08.782 19:43:32 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:08.782 19:43:32 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:08.782 19:43:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:09.040 [2024-10-17 19:43:32.572798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.040 null0 00:36:09.040 [2024-10-17 19:43:32.604835] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:09.040 [2024-10-17 19:43:32.605182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:09.041 19:43:32 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:09.041 325838476 00:36:09.041 19:43:32 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:09.041 375042759 00:36:09.041 19:43:32 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2379864 00:36:09.041 19:43:32 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2379864 /var/tmp/bperf.sock 00:36:09.041 19:43:32 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 2379864 ']' 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:09.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:09.041 19:43:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:09.041 [2024-10-17 19:43:32.676313] Starting SPDK v25.01-pre git sha1 23f83d500 / DPDK 24.03.0 initialization... 
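The two keyctl add calls above place the formatted PSKs into the kernel session keyring and print their serial numbers (325838476 and 375042759 this run); the later check and cleanup phases resolve the same keys by name. The full lifecycle, shown standalone (serials vary per run):

keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s   # prints the new serial
keyctl search @s user :spdk-test:key0                              # name -> serial
keyctl print 325838476                                             # dump the payload
keyctl unlink 325838476                                            # remove the link, as cleanup does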
00:36:09.041 [2024-10-17 19:43:32.676356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379864 ] 00:36:09.041 [2024-10-17 19:43:32.747593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.041 [2024-10-17 19:43:32.787523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.299 19:43:32 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:09.299 19:43:32 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:36:09.299 19:43:32 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:09.299 19:43:32 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:09.299 19:43:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:09.299 19:43:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:09.558 19:43:33 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:09.558 19:43:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:09.816 [2024-10-17 19:43:33.439827] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:09.816 nvme0n1 00:36:09.816 19:43:33 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:09.816 19:43:33 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:09.816 19:43:33 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:09.816 19:43:33 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:09.816 19:43:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.816 19:43:33 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:10.075 19:43:33 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:10.075 19:43:33 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:10.075 19:43:33 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:10.075 19:43:33 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:10.075 19:43:33 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:10.075 19:43:33 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.075 19:43:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.334 19:43:33 keyring_linux -- keyring/linux.sh@25 -- # sn=325838476 00:36:10.334 19:43:33 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:10.334 19:43:33 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:10.335 19:43:33 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 325838476 == \3\2\5\8\3\8\4\7\6 ]] 00:36:10.335 19:43:33 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 325838476 00:36:10.335 19:43:33 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:10.335 19:43:33 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:10.335 Running I/O for 1 seconds... 00:36:11.271 21771.00 IOPS, 85.04 MiB/s 00:36:11.271 Latency(us) 00:36:11.271 [2024-10-17T17:43:35.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:11.271 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:11.271 nvme0n1 : 1.01 21772.35 85.05 0.00 0.00 5860.24 4587.52 10485.76 00:36:11.271 [2024-10-17T17:43:35.055Z] =================================================================================================================== 00:36:11.271 [2024-10-17T17:43:35.055Z] Total : 21772.35 85.05 0.00 0.00 5860.24 4587.52 10485.76 00:36:11.271 { 00:36:11.271 "results": [ 00:36:11.271 { 00:36:11.271 "job": "nvme0n1", 00:36:11.271 "core_mask": "0x2", 00:36:11.271 "workload": "randread", 00:36:11.271 "status": "finished", 00:36:11.271 "queue_depth": 128, 00:36:11.271 "io_size": 4096, 00:36:11.271 "runtime": 1.005863, 00:36:11.271 "iops": 21772.34871945782, 00:36:11.271 "mibps": 85.0482371853821, 00:36:11.271 "io_failed": 0, 00:36:11.271 "io_timeout": 0, 00:36:11.271 "avg_latency_us": 5860.242942552729, 00:36:11.271 "min_latency_us": 4587.52, 00:36:11.271 "max_latency_us": 10485.76 00:36:11.271 } 00:36:11.271 ], 00:36:11.271 "core_count": 1 00:36:11.271 } 00:36:11.271 19:43:35 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:11.271 19:43:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:11.530 19:43:35 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:11.530 19:43:35 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:11.530 19:43:35 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:11.530 19:43:35 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:11.530 19:43:35 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:11.530 19:43:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.789 19:43:35 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:11.789 19:43:35 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:11.789 19:43:35 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:11.789 19:43:35 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
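A quick consistency check on the bdevperf numbers above: with 4 KiB (4096-byte) reads, MiB/s = IOPS x 4096 / 1048576 = IOPS / 256, so 21772.35 / 256 ≈ 85.05 MiB/s, matching the reported column, and the run completed roughly 21772.35 x 1.005863 ≈ 21,900 I/Os in its one-second window.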
00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:11.789 19:43:35 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:11.789 19:43:35 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:12.049 [2024-10-17 19:43:35.613549] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:12.049 [2024-10-17 19:43:35.614303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3030 (107): Transport endpoint is not connected 00:36:12.049 [2024-10-17 19:43:35.615299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3030 (9): Bad file descriptor 00:36:12.049 [2024-10-17 19:43:35.616300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:12.049 [2024-10-17 19:43:35.616310] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:12.049 [2024-10-17 19:43:35.616318] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:12.049 [2024-10-17 19:43:35.616326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
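The failure traced above, and the JSON-RPC error dump that follows, are the expected outcome of this step: the initiator presents :spdk-test:key1, which the target listener was never configured with, so the TLS handshake is torn down (errno 107, ENOTCONN) and controller init fails. The NOT wrapper inverts the exit status so the test passes only when the attach fails. A simplified sketch of that helper, inferred from the es bookkeeping visible in the trace (the real function also distinguishes signal exits, es > 128):

# simplified sketch of NOT from autotest_common.sh, not the verbatim helper
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what the caller wanted
}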
00:36:12.049 request: 00:36:12.049 { 00:36:12.049 "name": "nvme0", 00:36:12.049 "trtype": "tcp", 00:36:12.049 "traddr": "127.0.0.1", 00:36:12.049 "adrfam": "ipv4", 00:36:12.049 "trsvcid": "4420", 00:36:12.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.049 "prchk_reftag": false, 00:36:12.049 "prchk_guard": false, 00:36:12.049 "hdgst": false, 00:36:12.049 "ddgst": false, 00:36:12.049 "psk": ":spdk-test:key1", 00:36:12.049 "allow_unrecognized_csi": false, 00:36:12.049 "method": "bdev_nvme_attach_controller", 00:36:12.049 "req_id": 1 00:36:12.049 } 00:36:12.049 Got JSON-RPC error response 00:36:12.049 response: 00:36:12.049 { 00:36:12.049 "code": -5, 00:36:12.049 "message": "Input/output error" 00:36:12.049 } 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@33 -- # sn=325838476 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 325838476 00:36:12.049 1 links removed 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@33 -- # sn=375042759 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 375042759 00:36:12.049 1 links removed 00:36:12.049 19:43:35 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2379864 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2379864 ']' 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2379864 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379864 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379864' 00:36:12.049 killing process with pid 2379864 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@969 -- # kill 2379864 00:36:12.049 Received shutdown signal, test time was about 1.000000 seconds 00:36:12.049 00:36:12.049 
Latency(us) 00:36:12.049 [2024-10-17T17:43:35.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.049 [2024-10-17T17:43:35.833Z] =================================================================================================================== 00:36:12.049 [2024-10-17T17:43:35.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:12.049 19:43:35 keyring_linux -- common/autotest_common.sh@974 -- # wait 2379864 00:36:12.308 19:43:35 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2379847 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 2379847 ']' 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 2379847 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379847 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379847' 00:36:12.308 killing process with pid 2379847 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@969 -- # kill 2379847 00:36:12.308 19:43:35 keyring_linux -- common/autotest_common.sh@974 -- # wait 2379847 00:36:12.567 00:36:12.567 real 0m4.317s 00:36:12.567 user 0m8.122s 00:36:12.567 sys 0m1.427s 00:36:12.567 19:43:36 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:12.567 19:43:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:12.567 ************************************ 00:36:12.567 END TEST keyring_linux 00:36:12.567 ************************************ 00:36:12.567 19:43:36 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:12.567 19:43:36 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:12.567 19:43:36 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:12.567 19:43:36 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:12.567 19:43:36 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:12.567 19:43:36 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:12.567 19:43:36 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:12.567 19:43:36 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:12.567 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:36:12.567 19:43:36 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:12.567 19:43:36 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:12.567 19:43:36 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:12.567 19:43:36 -- common/autotest_common.sh@10 -- # set +x 00:36:17.838 INFO: APP EXITING 
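The shutdown traced above is the stock killprocess helper: probe the pid with kill -0, check the resolved command name so a sudo wrapper is never signalled directly, then kill and wait. Reconstructed from the trace, simplified (the real helper handles the sudo case specially rather than bailing out):

# sketch of killprocess as traced above
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                                         # still running?
    if [[ $(uname) == Linux ]]; then
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never signal the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"    # wait succeeds because spdk_tgt/bdevperf are shell children
}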
00:36:17.838 INFO: killing all VMs 00:36:17.838 INFO: killing vhost app 00:36:17.838 INFO: EXIT DONE 00:36:20.374 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:36:20.374 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:36:20.374 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:36:23.663 Cleaning 00:36:23.663 Removing: /var/run/dpdk/spdk0/config 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:23.663 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:23.663 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:23.663 Removing: /var/run/dpdk/spdk1/config 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:23.663 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:23.663 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:23.663 Removing: /var/run/dpdk/spdk2/config 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:23.663 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:23.663 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:23.663 Removing: /var/run/dpdk/spdk3/config 00:36:23.663 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:23.663 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:23.663 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:23.663 Removing: /var/run/dpdk/spdk4/config 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:23.663 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:23.663 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:23.663 Removing: /dev/shm/bdev_svc_trace.1 00:36:23.663 Removing: /dev/shm/nvmf_trace.0 00:36:23.663 Removing: /dev/shm/spdk_tgt_trace.pid1904931 00:36:23.663 Removing: /var/run/dpdk/spdk0 00:36:23.663 Removing: /var/run/dpdk/spdk1 00:36:23.663 Removing: /var/run/dpdk/spdk2 00:36:23.663 Removing: /var/run/dpdk/spdk3 00:36:23.663 Removing: /var/run/dpdk/spdk4 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1902563 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1903635 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1904931 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1905485 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1906459 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1906671 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1908034 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1908061 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1908394 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1910130 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1911603 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1911929 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1912217 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1912416 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1912820 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1913026 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1913194 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1913512 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1914356 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1917353 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1917492 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1917649 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1917727 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1918154 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1918315 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1918649 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1918872 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1919080 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1919136 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1919392 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1919406 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1919965 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1920165 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1920520 00:36:23.663 Removing: 
/var/run/dpdk/spdk_pid1924234 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1928729 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1938776 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1939470 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1943742 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1943997 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1948488 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1954888 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1957490 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1967709 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1976851 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1978474 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1979426 00:36:23.663 Removing: /var/run/dpdk/spdk_pid1996283 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2000481 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2046579 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2052481 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2058245 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2064220 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2064265 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2065017 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2065886 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2066802 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2067272 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2067459 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2067718 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2067738 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2067741 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2068651 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2069561 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2070482 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2070951 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2070954 00:36:23.663 Removing: /var/run/dpdk/spdk_pid2071217 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2072394 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2073403 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2081488 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2110402 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2114956 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2116718 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2118360 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2118587 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2118775 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2118841 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2119338 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2121172 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2121939 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2122460 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2125269 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2125679 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2126263 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2130535 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2135924 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2135925 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2135926 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2139703 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2148052 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2152087 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2158066 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2159387 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2160713 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2162039 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2166794 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2170921 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2178891 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2178932 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2183431 00:36:23.922 Removing: 
/var/run/dpdk/spdk_pid2183660 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2183894 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2184347 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2184352 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2188837 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2189408 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2193975 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2196510 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2201910 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2207236 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2215948 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2223478 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2223517 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2242108 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2242588 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2243207 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2243746 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2244494 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2244996 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2245653 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2246128 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2250336 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2250615 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2256636 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2256742 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2262198 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2266272 00:36:23.922 Removing: /var/run/dpdk/spdk_pid2276696 00:36:23.923 Removing: /var/run/dpdk/spdk_pid2277375 00:36:23.923 Removing: /var/run/dpdk/spdk_pid2281530 00:36:23.923 Removing: /var/run/dpdk/spdk_pid2281841 00:36:23.923 Removing: /var/run/dpdk/spdk_pid2286061 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2291757 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2294155 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2304076 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2312941 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2314692 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2315990 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2332198 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2336165 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2338858 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2346836 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2346844 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2352088 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2354046 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2355986 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2357078 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2359170 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2360637 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2369389 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2369863 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2370528 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2372803 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2373277 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2373830 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2377772 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2377782 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2379298 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2379847 00:36:24.182 Removing: /var/run/dpdk/spdk_pid2379864 00:36:24.182 Clean 00:36:24.182 19:43:47 -- common/autotest_common.sh@1451 -- # return 0 00:36:24.182 19:43:47 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:36:24.182 19:43:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:24.182 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:36:24.182 19:43:47 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:36:24.182 
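With processes and /var/run/dpdk state cleaned up, the steps that follow post-process coverage: capture a tracefile for this run tagged with the hostname, merge it with the pre-test baseline, then strip sources that should not count. Condensed from the commands below (the real invocations also pass the genhtml --rc options and, for /usr/*, --ignore-errors unused,unused):

# condensed form of the lcov pipeline traced below
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
$LCOV -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
$LCOV -q -a cov_base.info -a cov_test.info -o cov_total.info
$LCOV -q -r cov_total.info '*/dpdk/*' -o cov_total.info           # vendored DPDK
$LCOV -q -r cov_total.info '/usr/*' -o cov_total.info             # system headers
$LCOV -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info   # example/app paths follow the same pattern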
19:43:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:24.182 19:43:47 -- common/autotest_common.sh@10 -- # set +x 00:36:24.441 19:43:47 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:24.441 19:43:47 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:24.441 19:43:47 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:24.441 19:43:47 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:36:24.441 19:43:47 -- spdk/autotest.sh@394 -- # hostname 00:36:24.441 19:43:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:24.441 geninfo: WARNING: invalid characters removed from testname! 00:36:46.375 19:44:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:47.312 19:44:11 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:49.216 19:44:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:51.121 19:44:14 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:53.036 19:44:16 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:54.940 19:44:18 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:56.849 19:44:20 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:56.849 19:44:20 -- common/autotest_common.sh@1690 -- $ [[ y == y ]] 00:36:56.850 19:44:20 -- common/autotest_common.sh@1691 -- $ lcov --version 00:36:56.850 19:44:20 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}' 00:36:56.850 19:44:20 -- common/autotest_common.sh@1691 -- $ lt 1.15 2 00:36:56.850 19:44:20 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:36:56.850 19:44:20 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:36:56.850 19:44:20 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:36:56.850 19:44:20 -- scripts/common.sh@336 -- $ IFS=.-: 00:36:56.850 19:44:20 -- scripts/common.sh@336 -- $ read -ra ver1 00:36:56.850 19:44:20 -- scripts/common.sh@337 -- $ IFS=.-: 00:36:56.850 19:44:20 -- scripts/common.sh@337 -- $ read -ra ver2 00:36:56.850 19:44:20 -- scripts/common.sh@338 -- $ local 'op=<' 00:36:56.850 19:44:20 -- scripts/common.sh@340 -- $ ver1_l=2 00:36:56.850 19:44:20 -- scripts/common.sh@341 -- $ ver2_l=1 00:36:56.850 19:44:20 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:36:56.850 19:44:20 -- scripts/common.sh@344 -- $ case "$op" in 00:36:56.850 19:44:20 -- scripts/common.sh@345 -- $ : 1 00:36:56.850 19:44:20 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:36:56.850 19:44:20 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:56.850 19:44:20 -- scripts/common.sh@365 -- $ decimal 1 00:36:56.850 19:44:20 -- scripts/common.sh@353 -- $ local d=1 00:36:56.850 19:44:20 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:36:56.850 19:44:20 -- scripts/common.sh@355 -- $ echo 1 00:36:56.850 19:44:20 -- scripts/common.sh@365 -- $ ver1[v]=1 00:36:56.850 19:44:20 -- scripts/common.sh@366 -- $ decimal 2 00:36:56.850 19:44:20 -- scripts/common.sh@353 -- $ local d=2 00:36:56.850 19:44:20 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:36:56.850 19:44:20 -- scripts/common.sh@355 -- $ echo 2 00:36:56.850 19:44:20 -- scripts/common.sh@366 -- $ ver2[v]=2 00:36:56.850 19:44:20 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:36:56.850 19:44:20 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:36:56.850 19:44:20 -- scripts/common.sh@368 -- $ return 0 00:36:56.850 19:44:20 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:56.850 19:44:20 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS= 00:36:56.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.850 --rc genhtml_branch_coverage=1 00:36:56.850 --rc genhtml_function_coverage=1 00:36:56.850 --rc genhtml_legend=1 00:36:56.850 --rc geninfo_all_blocks=1 00:36:56.850 --rc geninfo_unexecuted_blocks=1 00:36:56.850 00:36:56.850 ' 00:36:56.850 19:44:20 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS=' 00:36:56.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.850 --rc genhtml_branch_coverage=1 00:36:56.850 --rc genhtml_function_coverage=1 00:36:56.850 --rc genhtml_legend=1 00:36:56.850 --rc geninfo_all_blocks=1 00:36:56.850 --rc geninfo_unexecuted_blocks=1 00:36:56.850 00:36:56.850 ' 00:36:56.850 19:44:20 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov 00:36:56.850 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.850 --rc genhtml_branch_coverage=1 00:36:56.850 --rc genhtml_function_coverage=1 00:36:56.850 --rc genhtml_legend=1 00:36:56.850 --rc geninfo_all_blocks=1 00:36:56.850 --rc geninfo_unexecuted_blocks=1 00:36:56.850 00:36:56.850 ' 00:36:56.850 19:44:20 -- common/autotest_common.sh@1705 -- $ LCOV='lcov 00:36:56.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:56.850 --rc genhtml_branch_coverage=1 00:36:56.850 --rc genhtml_function_coverage=1 00:36:56.850 --rc genhtml_legend=1 00:36:56.850 --rc geninfo_all_blocks=1 00:36:56.850 --rc geninfo_unexecuted_blocks=1 00:36:56.850 00:36:56.850 ' 00:36:56.850 19:44:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:56.850 19:44:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:36:56.850 19:44:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:36:56.850 19:44:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:56.850 19:44:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:56.850 19:44:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.850 19:44:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.850 19:44:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.850 19:44:20 -- paths/export.sh@5 -- $ export PATH 00:36:56.850 19:44:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:56.850 19:44:20 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:36:56.850 19:44:20 -- common/autobuild_common.sh@486 -- $ date +%s 00:36:56.850 19:44:20 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729187060.XXXXXX 00:36:56.850 19:44:20 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729187060.xDeQoY 00:36:56.850 19:44:20 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:36:56.850 19:44:20 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:36:56.850 19:44:20 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:36:56.850 19:44:20 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:36:56.850 19:44:20 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:36:56.850 19:44:20 -- common/autobuild_common.sh@502 -- $ get_config_params 00:36:56.850 19:44:20 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:36:56.850 19:44:20 -- common/autotest_common.sh@10 -- $ set +x 00:36:56.850 19:44:20 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:36:56.850 19:44:20 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:36:56.850 19:44:20 -- pm/common@17 -- $ local monitor 00:36:56.850 19:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:56.850 19:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:56.850 19:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:56.850 19:44:20 -- pm/common@21 -- $ date +%s 00:36:56.850 19:44:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:56.850 19:44:20 -- pm/common@21 -- $ date +%s 00:36:56.850 19:44:20 -- pm/common@25 -- $ sleep 1 00:36:56.850 19:44:20 -- pm/common@21 -- $ date +%s 00:36:56.850 19:44:20 -- pm/common@21 -- $ date +%s 00:36:56.850 19:44:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729187060 00:36:56.850 19:44:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729187060 00:36:56.850 19:44:20 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729187060 00:36:56.850 19:44:20 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729187060 00:36:56.850 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729187060_collect-cpu-load.pm.log 00:36:56.850 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729187060_collect-vmstat.pm.log 00:36:56.850 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729187060_collect-cpu-temp.pm.log 00:36:56.850 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729187060_collect-bmc-pm.bmc.pm.log 00:36:57.789 19:44:21 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:36:57.789 19:44:21 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:36:57.789 19:44:21 -- spdk/autopackage.sh@14 -- $ timing_finish 
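autopackage restarts the power/thermal collectors above with a shared monitor.autopackage.sh.<epoch> prefix; each collector records its pid under the power output directory so the EXIT trap can signal it later, which is what the stop sequence below does. A sketch of that stop path, assuming each collector writes its own collect-<name>.pid file, as the existence checks below imply:

# sketch of signal_monitor_resources from the pm/common trace below
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
signal_monitor_resources() {
    local sig=$1 f
    for f in "$out"/collect-{cpu-load,vmstat,cpu-temp,bmc-pm}.pid; do
        [[ -e $f ]] && kill -"$sig" "$(cat "$f")"   # bmc-pm is signalled via sudo in the real trap
    done
}
signal_monitor_resources TERM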
00:36:57.789 19:44:21 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:57.789 19:44:21 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:57.789 19:44:21 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:57.789 19:44:21 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:36:57.789 19:44:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:36:57.789 19:44:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:36:57.789 19:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.789 19:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:36:57.789 19:44:21 -- pm/common@44 -- $ pid=2390519 00:36:57.789 19:44:21 -- pm/common@50 -- $ kill -TERM 2390519 00:36:57.789 19:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.789 19:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:36:57.790 19:44:21 -- pm/common@44 -- $ pid=2390520 00:36:57.790 19:44:21 -- pm/common@50 -- $ kill -TERM 2390520 00:36:57.790 19:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.790 19:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:36:57.790 19:44:21 -- pm/common@44 -- $ pid=2390523 00:36:57.790 19:44:21 -- pm/common@50 -- $ kill -TERM 2390523 00:36:57.790 19:44:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:36:57.790 19:44:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:36:57.790 19:44:21 -- pm/common@44 -- $ pid=2390545 00:36:57.790 19:44:21 -- pm/common@50 -- $ sudo -E kill -TERM 2390545 00:36:57.790 + [[ -n 1826097 ]] 00:36:57.790 + sudo kill 1826097 00:36:57.799 [Pipeline] } 00:36:57.814 [Pipeline] // stage 00:36:57.820 [Pipeline] } 00:36:57.834 [Pipeline] // timeout 00:36:57.839 [Pipeline] } 00:36:57.853 [Pipeline] // catchError 00:36:57.858 [Pipeline] } 00:36:57.873 [Pipeline] // wrap 00:36:57.879 [Pipeline] } 00:36:57.892 [Pipeline] // catchError 00:36:57.903 [Pipeline] stage 00:36:57.905 [Pipeline] { (Epilogue) 00:36:57.919 [Pipeline] catchError 00:36:57.920 [Pipeline] { 00:36:57.934 [Pipeline] echo 00:36:57.935 Cleanup processes 00:36:57.943 [Pipeline] sh 00:36:58.228 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.228 2390692 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:36:58.228 2391019 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.242 [Pipeline] sh 00:36:58.525 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:36:58.525 ++ grep -v 'sudo pgrep' 00:36:58.525 ++ awk '{print $1}' 00:36:58.525 + sudo kill -9 2390692 00:36:58.539 [Pipeline] sh 00:36:58.825 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:11.049 [Pipeline] sh 00:37:11.334 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:11.334 Artifacts sizes are good 00:37:11.351 [Pipeline] archiveArtifacts 00:37:11.358 Archiving artifacts 00:37:11.479 [Pipeline] sh 00:37:11.816 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 
00:37:11.839 [Pipeline] cleanWs 00:37:11.862 [WS-CLEANUP] Deleting project workspace... 00:37:11.862 [WS-CLEANUP] Deferred wipeout is used... 00:37:11.891 [WS-CLEANUP] done 00:37:11.893 [Pipeline] } 00:37:11.909 [Pipeline] // catchError 00:37:11.919 [Pipeline] sh 00:37:12.200 + logger -p user.info -t JENKINS-CI 00:37:12.209 [Pipeline] } 00:37:12.222 [Pipeline] // stage 00:37:12.226 [Pipeline] } 00:37:12.240 [Pipeline] // node 00:37:12.244 [Pipeline] End of Pipeline 00:37:12.283 Finished: SUCCESS